I0409 21:07:32.179716 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0409 21:07:32.180024 6 e2e.go:109] Starting e2e run "e9e6a23f-503e-4b5e-837b-2eb5a691fd7e" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586466451 - Will randomize all specs
Will run 278 of 4842 specs

Apr 9 21:07:32.240: INFO: >>> kubeConfig: /root/.kube/config
Apr 9 21:07:32.245: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 9 21:07:32.279: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 9 21:07:32.321: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 9 21:07:32.321: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 9 21:07:32.321: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 9 21:07:32.333: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 9 21:07:32.333: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 9 21:07:32.333: INFO: e2e test version: v1.17.4
Apr 9 21:07:32.335: INFO: kube-apiserver version: v1.17.2
Apr 9 21:07:32.335: INFO: >>> kubeConfig: /root/.kube/config
Apr 9 21:07:32.340: INFO: Cluster IP family: ipv4
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:07:32.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Apr 9 21:07:32.396: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 21:07:33.520: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 21:07:35.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063253, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063253, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063253, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063253, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 21:07:38.558: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:07:38.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5042" for this suite.
STEP: Destroying namespace "webhook-5042-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.415 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":1,"skipped":1,"failed":0}
SS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:07:38.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-3411/configmap-test-b58f46bd-edb2-4739-9e7a-d0a07fd6da3b
STEP: Creating a pod to test consume configMaps
Apr 9 21:07:38.877: INFO: Waiting up to 5m0s for pod "pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad" in namespace "configmap-3411" to be "success or failure"
Apr 9 21:07:38.891: INFO: Pod "pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad": Phase="Pending", Reason="", readiness=false. Elapsed: 14.195058ms
Apr 9 21:07:40.896: INFO: Pod "pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018627675s
Apr 9 21:07:42.900: INFO: Pod "pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022633942s
STEP: Saw pod success
Apr 9 21:07:42.900: INFO: Pod "pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad" satisfied condition "success or failure"
Apr 9 21:07:42.903: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad container env-test:
STEP: delete the pod
Apr 9 21:07:42.955: INFO: Waiting for pod pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad to disappear
Apr 9 21:07:42.969: INFO: Pod pod-configmaps-904488ef-7f01-4f9d-8ede-8b912b27e0ad no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:07:42.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3411" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:07:42.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-63f92811-7db0-4af8-9ce1-c5ba97fc2fb7
STEP: Creating a pod to test consume configMaps
Apr 9 21:07:43.091: INFO: Waiting up to 5m0s for pod "pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06" in namespace "configmap-8537" to be "success or failure"
Apr 9 21:07:43.095: INFO: Pod "pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.704485ms
Apr 9 21:07:45.099: INFO: Pod "pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008376662s
Apr 9 21:07:47.103: INFO: Pod "pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012526276s
STEP: Saw pod success
Apr 9 21:07:47.103: INFO: Pod "pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06" satisfied condition "success or failure"
Apr 9 21:07:47.106: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06 container configmap-volume-test:
STEP: delete the pod
Apr 9 21:07:47.171: INFO: Waiting for pod pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06 to disappear
Apr 9 21:07:47.175: INFO: Pod pod-configmaps-2195be66-85ae-4c69-a398-778d3e50fe06 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:07:47.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8537" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":41,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:07:47.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9175
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9175
STEP: Creating statefulset with conflicting port in namespace statefulset-9175
STEP: Waiting until pod test-pod will start running in namespace statefulset-9175
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9175
Apr 9 21:07:53.309: INFO: Observed stateful pod in namespace: statefulset-9175, name: ss-0, uid: b5d501be-2ff4-43d8-b4ec-4fa6119090db, status phase: Pending. Waiting for statefulset controller to delete.
Apr 9 21:07:53.727: INFO: Observed stateful pod in namespace: statefulset-9175, name: ss-0, uid: b5d501be-2ff4-43d8-b4ec-4fa6119090db, status phase: Failed. Waiting for statefulset controller to delete.
Apr 9 21:07:53.764: INFO: Observed stateful pod in namespace: statefulset-9175, name: ss-0, uid: b5d501be-2ff4-43d8-b4ec-4fa6119090db, status phase: Failed. Waiting for statefulset controller to delete.
Apr 9 21:07:53.779: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9175
STEP: Removing pod with conflicting port in namespace statefulset-9175
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9175 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 9 21:07:57.858: INFO: Deleting all statefulset in ns statefulset-9175
Apr 9 21:07:57.861: INFO: Scaling statefulset ss to 0
Apr 9 21:08:17.898: INFO: Waiting for statefulset status.replicas updated to 0
Apr 9 21:08:17.901: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:17.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9175" for this suite.
• [SLOW TEST:30.740 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":4,"skipped":47,"failed":0}
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:17.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 9 21:08:17.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5118'
Apr 9 21:08:20.373: INFO: stderr: ""
Apr 9 21:08:20.373: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Apr 9 21:08:20.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5118'
Apr 9 21:08:20.647: INFO: stderr: ""
Apr 9 21:08:20.647: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 9 21:08:21.651: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 9 21:08:21.651: INFO: Found 0 / 1
Apr 9 21:08:22.651: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 9 21:08:22.651: INFO: Found 0 / 1
Apr 9 21:08:23.651: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 9 21:08:23.651: INFO: Found 1 / 1
Apr 9 21:08:23.651: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 9 21:08:23.653: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 9 21:08:23.653: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 9 21:08:23.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-j4v4v --namespace=kubectl-5118'
Apr 9 21:08:23.759: INFO: stderr: ""
Apr 9 21:08:23.759: INFO: stdout: "Name: agnhost-master-j4v4v\nNamespace: kubectl-5118\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Thu, 09 Apr 2020 21:08:20 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.129\nIPs:\n IP: 10.244.2.129\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://6542abc577165f160dd125851b671d4ff59f16a8f90cf9650542770e1624336d\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 09 Apr 2020 21:08:22 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hdhfm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hdhfm:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hdhfm\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-5118/agnhost-master-j4v4v to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n"
Apr 9 21:08:23.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5118'
Apr 9 21:08:23.874: INFO: stderr: ""
Apr 9 21:08:23.874: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5118\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-j4v4v\n"
Apr 9 21:08:23.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5118'
Apr 9 21:08:23.994: INFO: stderr: ""
Apr 9 21:08:23.994: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5118\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.107.222.193\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.129:6379\nSession Affinity: None\nEvents: \n"
Apr 9 21:08:23.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Apr 9 21:08:24.128: INFO: stderr: ""
Apr 9 21:08:24.128: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Thu, 09 Apr 2020 21:08:16 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 09 Apr 2020 21:07:28 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 09 Apr 2020 21:07:28 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 09 Apr 2020 21:07:28 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 09 Apr 2020 21:07:28 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 25d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 25d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 25d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Apr 9 21:08:24.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5118'
Apr 9 21:08:24.229: INFO: stderr: ""
Apr 9 21:08:24.229: INFO: stdout: "Name: kubectl-5118\nLabels: e2e-framework=kubectl\n e2e-run=e9e6a23f-503e-4b5e-837b-2eb5a691fd7e\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:24.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5118" for this suite.
• [SLOW TEST:6.315 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":5,"skipped":47,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:24.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 9 21:08:24.291: INFO: Waiting up to 5m0s for pod "pod-74f33011-6833-4d98-b673-03379be71582" in namespace "emptydir-9506" to be "success or failure"
Apr 9 21:08:24.295: INFO: Pod "pod-74f33011-6833-4d98-b673-03379be71582": Phase="Pending", Reason="", readiness=false. Elapsed: 3.556965ms
Apr 9 21:08:26.299: INFO: Pod "pod-74f33011-6833-4d98-b673-03379be71582": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007073204s
Apr 9 21:08:28.302: INFO: Pod "pod-74f33011-6833-4d98-b673-03379be71582": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01052048s
STEP: Saw pod success
Apr 9 21:08:28.302: INFO: Pod "pod-74f33011-6833-4d98-b673-03379be71582" satisfied condition "success or failure"
Apr 9 21:08:28.304: INFO: Trying to get logs from node jerma-worker pod pod-74f33011-6833-4d98-b673-03379be71582 container test-container:
STEP: delete the pod
Apr 9 21:08:28.326: INFO: Waiting for pod pod-74f33011-6833-4d98-b673-03379be71582 to disappear
Apr 9 21:08:28.330: INFO: Pod pod-74f33011-6833-4d98-b673-03379be71582 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:28.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9506" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":59,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:28.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:39.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4750" for this suite.
• [SLOW TEST:11.200 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":7,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:39.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 9 21:08:39.633: INFO: Waiting up to 5m0s for pod "pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8" in namespace "emptydir-9043" to be "success or failure"
Apr 9 21:08:39.715: INFO: Pod "pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8": Phase="Pending", Reason="", readiness=false. Elapsed: 81.422949ms
Apr 9 21:08:41.718: INFO: Pod "pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085121192s
Apr 9 21:08:43.722: INFO: Pod "pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088574216s
STEP: Saw pod success
Apr 9 21:08:43.722: INFO: Pod "pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8" satisfied condition "success or failure"
Apr 9 21:08:43.724: INFO: Trying to get logs from node jerma-worker2 pod pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8 container test-container:
STEP: delete the pod
Apr 9 21:08:43.759: INFO: Waiting for pod pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8 to disappear
Apr 9 21:08:43.828: INFO: Pod pod-1ec2d877-2ec3-42f3-a0a8-535e6348afb8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:43.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9043" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":112,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:43.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Apr 9 21:08:43.896: INFO: Waiting up to 5m0s for pod "client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a" in namespace "containers-326" to be "success or failure"
Apr 9 21:08:43.911: INFO: Pod "client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.618591ms
Apr 9 21:08:45.946: INFO: Pod "client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050608617s
Apr 9 21:08:47.950: INFO: Pod "client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054013965s
STEP: Saw pod success
Apr 9 21:08:47.950: INFO: Pod "client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a" satisfied condition "success or failure"
Apr 9 21:08:47.952: INFO: Trying to get logs from node jerma-worker pod client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a container test-container:
STEP: delete the pod
Apr 9 21:08:47.986: INFO: Waiting for pod client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a to disappear
Apr 9 21:08:48.001: INFO: Pod client-containers-d021a530-452c-4e9c-8223-ca3201b7a67a no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:48.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-326" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":124,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:48.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 21:08:48.553: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 21:08:50.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063328, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063328, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063328, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063328, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 21:08:53.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:53.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4822" for this suite.
STEP: Destroying namespace "webhook-4822-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.698 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":10,"skipped":142,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:53.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-31efebc7-2de1-422f-9db4-c4551c3d924c
STEP: Creating a pod to test consume secrets
Apr 9 21:08:53.824: INFO: Waiting up to 5m0s for pod "pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e" in namespace "secrets-8648" to be "success or failure"
Apr 9 21:08:53.841: INFO: Pod "pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.04444ms
Apr 9 21:08:55.859: INFO: Pod "pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034734332s
Apr 9 21:08:57.883: INFO: Pod "pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058564343s
STEP: Saw pod success
Apr 9 21:08:57.883: INFO: Pod "pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e" satisfied condition "success or failure"
Apr 9 21:08:57.885: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e container secret-volume-test:
STEP: delete the pod
Apr 9 21:08:57.916: INFO: Waiting for pod pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e to disappear
Apr 9 21:08:57.930: INFO: Pod pod-secrets-8d61fdf5-21d1-4a75-baf4-4d2b84bfc92e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:08:57.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8648" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":146,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:08:57.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 9 21:08:58.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7621'
Apr 9 21:08:58.190: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 9 21:08:58.190: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Apr 9 21:08:58.238: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-rhlm9]
Apr 9 21:08:58.238: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-rhlm9" in namespace "kubectl-7621" to be "running and ready"
Apr 9 21:08:58.260: INFO: Pod "e2e-test-httpd-rc-rhlm9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.956215ms
Apr 9 21:09:00.277: INFO: Pod "e2e-test-httpd-rc-rhlm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039542506s
Apr 9 21:09:02.282: INFO: Pod "e2e-test-httpd-rc-rhlm9": Phase="Running", Reason="", readiness=true. Elapsed: 4.044067152s
Apr 9 21:09:02.282: INFO: Pod "e2e-test-httpd-rc-rhlm9" satisfied condition "running and ready"
Apr 9 21:09:02.282: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-rhlm9]
Apr 9 21:09:02.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7621'
Apr 9 21:09:02.397: INFO: stderr: ""
Apr 9 21:09:02.397: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.46. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.46. Set the 'ServerName' directive globally to suppress this message\n[Thu Apr 09 21:09:00.625926 2020] [mpm_event:notice] [pid 1:tid 140528249092968] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Apr 09 21:09:00.625985 2020] [core:notice] [pid 1:tid 140528249092968] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Apr 9 21:09:02.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7621'
Apr 9 21:09:02.504: INFO: stderr: ""
Apr 9 21:09:02.504: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:09:02.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7621" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":12,"skipped":161,"failed":0}
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:09:02.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 9 21:09:02.627: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d6c2fa42-cd64-441f-8830-6804aff2ae02" in namespace "security-context-test-9938" to be "success or failure"
Apr 9 21:09:02.631: INFO: Pod "busybox-privileged-false-d6c2fa42-cd64-441f-8830-6804aff2ae02": Phase="Pending", Reason="", readiness=false. Elapsed: 3.528214ms
Apr 9 21:09:04.634: INFO: Pod "busybox-privileged-false-d6c2fa42-cd64-441f-8830-6804aff2ae02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007285374s
Apr 9 21:09:06.638: INFO: Pod "busybox-privileged-false-d6c2fa42-cd64-441f-8830-6804aff2ae02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011264109s
Apr 9 21:09:06.638: INFO: Pod "busybox-privileged-false-d6c2fa42-cd64-441f-8830-6804aff2ae02" satisfied condition "success or failure"
Apr 9 21:09:06.645: INFO: Got logs for pod "busybox-privileged-false-d6c2fa42-cd64-441f-8830-6804aff2ae02": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:09:06.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9938" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":161,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:09:06.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 9 21:09:06.706: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766495 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 9 21:09:06.706: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766495 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 9 21:09:16.714: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766548 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 9 21:09:16.714: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766548 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 9 21:09:26.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766578 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 9 21:09:26.722: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766578 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 9 21:09:36.729: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766608 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 9 21:09:36.729: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-a 53d20d3c-625f-4da9-b21f-b14d4792e730 6766608 0 2020-04-09 21:09:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 9 21:09:46.737: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-b ea53f23a-7eb3-449a-a0ba-5126efd915d6 6766638 0 2020-04-09 21:09:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 9 21:09:46.737: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-b ea53f23a-7eb3-449a-a0ba-5126efd915d6 6766638 0 2020-04-09 21:09:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 9 21:09:56.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-b ea53f23a-7eb3-449a-a0ba-5126efd915d6 6766668 0 2020-04-09 21:09:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 9 21:09:56.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1665 /api/v1/namespaces/watch-1665/configmaps/e2e-watch-test-configmap-b ea53f23a-7eb3-449a-a0ba-5126efd915d6 6766668 0 2020-04-09 21:09:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:10:06.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1665" for this suite.
• [SLOW TEST:60.102 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":14,"skipped":167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:10:06.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-212.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-212.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 9 21:10:12.865: INFO: DNS probes using dns-212/dns-test-11daef63-a0d0-4387-a1dc-3f8e6cacc1de succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:10:12.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-212" for this suite.
• [SLOW TEST:6.193 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":15,"skipped":189,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:10:12.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:10:29.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9242" for this suite.
• [SLOW TEST:16.101 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":16,"skipped":204,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:10:29.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:10:29.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8421" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":17,"skipped":208,"failed":0}
S
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:10:29.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Apr 9 21:10:29.282: INFO: Waiting up to 5m0s for pod "downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf" in namespace "downward-api-3349" to be "success or failure"
Apr 9 21:10:29.291: INFO: Pod "downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.20295ms
Apr 9 21:10:31.303: INFO: Pod "downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021036728s
Apr 9 21:10:33.308: INFO: Pod "downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02530904s
STEP: Saw pod success
Apr 9 21:10:33.308: INFO: Pod "downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf" satisfied condition "success or failure"
Apr 9 21:10:33.311: INFO: Trying to get logs from node jerma-worker pod downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf container dapi-container:
STEP: delete the pod
Apr 9 21:10:33.335: INFO: Waiting for pod downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf to disappear
Apr 9 21:10:33.339: INFO: Pod downward-api-5404c672-d7b9-4dd3-8b81-41f5471c82bf no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:10:33.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3349" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:10:33.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 9 21:10:33.419: INFO: Waiting up to 5m0s for pod "pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea" in namespace "emptydir-9530" to be "success or failure"
Apr 9 21:10:33.435: INFO: Pod "pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 16.592764ms
Apr 9 21:10:35.440: INFO: Pod "pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02120956s
Apr 9 21:10:37.443: INFO: Pod "pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024696388s
STEP: Saw pod success
Apr 9 21:10:37.444: INFO: Pod "pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea" satisfied condition "success or failure"
Apr 9 21:10:37.446: INFO: Trying to get logs from node jerma-worker pod pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea container test-container:
STEP: delete the pod
Apr 9 21:10:37.461: INFO: Waiting for pod pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea to disappear
Apr 9 21:10:37.477: INFO: Pod pod-9c530b8b-40f8-43d2-8064-89d2a747b8ea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:10:37.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9530" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":235,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:10:37.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-mxk7 STEP: Creating a pod to test atomic-volume-subpath Apr 9 21:10:37.575: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mxk7" in namespace "subpath-9760" to be "success or failure" Apr 9 21:10:37.579: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.990747ms Apr 9 21:10:39.583: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008313759s Apr 9 21:10:41.587: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 4.012136837s Apr 9 21:10:43.592: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.016493764s Apr 9 21:10:45.595: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 8.020367461s Apr 9 21:10:47.600: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 10.024540682s Apr 9 21:10:49.604: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 12.028518643s Apr 9 21:10:51.608: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 14.032614864s Apr 9 21:10:53.611: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 16.036315892s Apr 9 21:10:55.615: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 18.040144319s Apr 9 21:10:57.619: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 20.043792575s Apr 9 21:10:59.622: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Running", Reason="", readiness=true. Elapsed: 22.046990024s Apr 9 21:11:01.627: INFO: Pod "pod-subpath-test-projected-mxk7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052350719s STEP: Saw pod success Apr 9 21:11:01.627: INFO: Pod "pod-subpath-test-projected-mxk7" satisfied condition "success or failure" Apr 9 21:11:01.630: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-mxk7 container test-container-subpath-projected-mxk7: STEP: delete the pod Apr 9 21:11:01.677: INFO: Waiting for pod pod-subpath-test-projected-mxk7 to disappear Apr 9 21:11:01.681: INFO: Pod pod-subpath-test-projected-mxk7 no longer exists STEP: Deleting pod pod-subpath-test-projected-mxk7 Apr 9 21:11:01.681: INFO: Deleting pod "pod-subpath-test-projected-mxk7" in namespace "subpath-9760" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:11:01.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9760" for this suite. • [SLOW TEST:24.206 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":20,"skipped":241,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
STEP: Creating a kubernetes client Apr 9 21:11:01.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:11:01.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2" in namespace "projected-3985" to be "success or failure" Apr 9 21:11:01.753: INFO: Pod "downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124446ms Apr 9 21:11:03.759: INFO: Pod "downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009910188s Apr 9 21:11:05.763: INFO: Pod "downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013737895s STEP: Saw pod success Apr 9 21:11:05.763: INFO: Pod "downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2" satisfied condition "success or failure" Apr 9 21:11:05.766: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2 container client-container: STEP: delete the pod Apr 9 21:11:05.813: INFO: Waiting for pod downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2 to disappear Apr 9 21:11:05.819: INFO: Pod downwardapi-volume-b82981b0-85db-455c-a494-7dfa261f54b2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:11:05.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3985" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":250,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:11:05.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-14b6842d-7d44-4b44-88b3-0673a1137621 in namespace container-probe-2866 Apr 9 21:11:09.872: INFO: Started pod test-webserver-14b6842d-7d44-4b44-88b3-0673a1137621 in namespace container-probe-2866 STEP: checking the pod's current state and verifying that restartCount is present Apr 9 21:11:09.875: INFO: Initial restart count of pod test-webserver-14b6842d-7d44-4b44-88b3-0673a1137621 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:15:10.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2866" for this suite. • [SLOW TEST:244.634 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:15:10.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for 
a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-4e6a3b76-d8fe-49b0-a343-7c4d67da0b6f STEP: Creating a pod to test consume configMaps Apr 9 21:15:10.544: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b" in namespace "projected-570" to be "success or failure" Apr 9 21:15:10.555: INFO: Pod "pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177828ms Apr 9 21:15:12.643: INFO: Pod "pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09814747s Apr 9 21:15:14.647: INFO: Pod "pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102629457s STEP: Saw pod success Apr 9 21:15:14.647: INFO: Pod "pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b" satisfied condition "success or failure" Apr 9 21:15:14.650: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b container projected-configmap-volume-test: STEP: delete the pod Apr 9 21:15:14.688: INFO: Waiting for pod pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b to disappear Apr 9 21:15:14.711: INFO: Pod pod-projected-configmaps-468e4f13-b022-4fbe-849a-4fcc2d25ad2b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:15:14.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-570" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":285,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:15:14.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 9 21:15:14.820: INFO: Waiting up to 5m0s for pod "pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94" in namespace "emptydir-4947" to be "success or failure" Apr 9 21:15:14.837: INFO: Pod "pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94": Phase="Pending", Reason="", readiness=false. Elapsed: 17.777229ms Apr 9 21:15:16.841: INFO: Pod "pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021639117s Apr 9 21:15:18.846: INFO: Pod "pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026010292s STEP: Saw pod success Apr 9 21:15:18.846: INFO: Pod "pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94" satisfied condition "success or failure" Apr 9 21:15:18.849: INFO: Trying to get logs from node jerma-worker pod pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94 container test-container: STEP: delete the pod Apr 9 21:15:18.880: INFO: Waiting for pod pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94 to disappear Apr 9 21:15:18.890: INFO: Pod pod-a752f8c6-6c71-4522-8d46-686cbf4f8a94 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:15:18.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4947" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":287,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:15:18.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:15:18.980: INFO: Creating deployment "test-recreate-deployment" Apr 9 21:15:18.984: INFO: Waiting deployment 
"test-recreate-deployment" to be updated to revision 1 Apr 9 21:15:19.011: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 9 21:15:21.018: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 9 21:15:21.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063719, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063719, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063719, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063718, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 21:15:23.025: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 9 21:15:23.032: INFO: Updating deployment test-recreate-deployment Apr 9 21:15:23.032: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 9 21:15:23.503: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4887 /apis/apps/v1/namespaces/deployment-4887/deployments/test-recreate-deployment e8d2608b-2270-46fd-85c5-bb5bb4d8219a 6767892 2 2020-04-09 21:15:18 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001bef878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-09 21:15:23 +0000 UTC,LastTransitionTime:2020-04-09 21:15:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-09 21:15:23 +0000 UTC,LastTransitionTime:2020-04-09 21:15:18 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 9 21:15:23.519: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4887 /apis/apps/v1/namespaces/deployment-4887/replicasets/test-recreate-deployment-5f94c574ff 
cfc37840-2d20-40d3-a266-85bdf2a853db 6767889 1 2020-04-09 21:15:23 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment e8d2608b-2270-46fd-85c5-bb5bb4d8219a 0xc001beff67 0xc001beff68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001fba068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 21:15:23.519: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 9 21:15:23.519: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-4887 /apis/apps/v1/namespaces/deployment-4887/replicasets/test-recreate-deployment-799c574856 38219529-939c-4f1b-a9dd-6cec82dcf3da 6767879 2 2020-04-09 21:15:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 
deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment e8d2608b-2270-46fd-85c5-bb5bb4d8219a 0xc001fba0e7 0xc001fba0e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001fba198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 21:15:23.524: INFO: Pod "test-recreate-deployment-5f94c574ff-xcr7n" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-xcr7n test-recreate-deployment-5f94c574ff- deployment-4887 /api/v1/namespaces/deployment-4887/pods/test-recreate-deployment-5f94c574ff-xcr7n a8ae4cd0-be33-45d6-9255-f6a2bb149166 6767894 0 2020-04-09 21:15:23 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff cfc37840-2d20-40d3-a266-85bdf2a853db 0xc001fba797 0xc001fba798}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nws8g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nws8g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nws8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:15:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:15:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:15:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:15:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 21:15:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:15:23.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4887" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":25,"skipped":304,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:15:23.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace 
statefulset-1510 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1510 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1510 Apr 9 21:15:23.634: INFO: Found 0 stateful pods, waiting for 1 Apr 9 21:15:33.639: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 9 21:15:33.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:15:33.898: INFO: stderr: "I0409 21:15:33.763970 257 log.go:172] (0xc0007d8b00) (0xc0007d21e0) Create stream\nI0409 21:15:33.764030 257 log.go:172] (0xc0007d8b00) (0xc0007d21e0) Stream added, broadcasting: 1\nI0409 21:15:33.767816 257 log.go:172] (0xc0007d8b00) Reply frame received for 1\nI0409 21:15:33.767873 257 log.go:172] (0xc0007d8b00) (0xc00076b180) Create stream\nI0409 21:15:33.767909 257 log.go:172] (0xc0007d8b00) (0xc00076b180) Stream added, broadcasting: 3\nI0409 21:15:33.769336 257 log.go:172] (0xc0007d8b00) Reply frame received for 3\nI0409 21:15:33.769377 257 log.go:172] (0xc0007d8b00) (0xc0007d2320) Create stream\nI0409 21:15:33.769389 257 log.go:172] (0xc0007d8b00) (0xc0007d2320) Stream added, broadcasting: 5\nI0409 21:15:33.770379 257 log.go:172] (0xc0007d8b00) Reply frame received for 5\nI0409 21:15:33.851939 257 log.go:172] (0xc0007d8b00) Data frame received for 5\nI0409 21:15:33.851967 257 log.go:172] (0xc0007d2320) (5) Data frame handling\nI0409 21:15:33.851981 257 log.go:172] (0xc0007d2320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0409 21:15:33.890196 257 log.go:172] (0xc0007d8b00) Data frame received for 3\nI0409 21:15:33.890245 257 log.go:172] (0xc00076b180) (3) Data frame handling\nI0409 21:15:33.890312 257 log.go:172] (0xc00076b180) (3) Data frame sent\nI0409 21:15:33.890342 257 log.go:172] (0xc0007d8b00) Data frame received for 3\nI0409 21:15:33.890359 257 log.go:172] (0xc00076b180) (3) Data frame handling\nI0409 21:15:33.890384 257 log.go:172] (0xc0007d8b00) Data frame received for 5\nI0409 21:15:33.890401 257 log.go:172] (0xc0007d2320) (5) Data frame handling\nI0409 21:15:33.892398 257 log.go:172] (0xc0007d8b00) Data frame received for 1\nI0409 21:15:33.892435 257 log.go:172] (0xc0007d21e0) (1) Data frame handling\nI0409 21:15:33.892456 257 log.go:172] (0xc0007d21e0) (1) Data frame sent\nI0409 21:15:33.892478 257 log.go:172] (0xc0007d8b00) (0xc0007d21e0) Stream removed, broadcasting: 1\nI0409 21:15:33.892503 257 log.go:172] (0xc0007d8b00) Go away received\nI0409 21:15:33.892928 257 log.go:172] (0xc0007d8b00) (0xc0007d21e0) Stream removed, broadcasting: 1\nI0409 21:15:33.892966 257 log.go:172] (0xc0007d8b00) (0xc00076b180) Stream removed, broadcasting: 3\nI0409 21:15:33.892991 257 log.go:172] (0xc0007d8b00) (0xc0007d2320) Stream removed, broadcasting: 5\n" Apr 9 21:15:33.898: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:15:33.898: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:15:33.902: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 9 21:15:43.906: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:15:43.906: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 21:15:43.919: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999427s Apr 9 21:15:44.923: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 8.995861801s Apr 9 21:15:45.928: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991599855s Apr 9 21:15:46.933: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986698444s Apr 9 21:15:47.936: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982172729s Apr 9 21:15:48.941: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978684181s Apr 9 21:15:49.945: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974028646s Apr 9 21:15:50.950: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.969854936s Apr 9 21:15:51.954: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.964912384s Apr 9 21:15:52.958: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.110002ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1510 Apr 9 21:15:53.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:15:54.196: INFO: stderr: "I0409 21:15:54.108450 280 log.go:172] (0xc000b80000) (0xc0008fa000) Create stream\nI0409 21:15:54.108522 280 log.go:172] (0xc000b80000) (0xc0008fa000) Stream added, broadcasting: 1\nI0409 21:15:54.110881 280 log.go:172] (0xc000b80000) Reply frame received for 1\nI0409 21:15:54.110934 280 log.go:172] (0xc000b80000) (0xc0008fa0a0) Create stream\nI0409 21:15:54.110950 280 log.go:172] (0xc000b80000) (0xc0008fa0a0) Stream added, broadcasting: 3\nI0409 21:15:54.111959 280 log.go:172] (0xc000b80000) Reply frame received for 3\nI0409 21:15:54.112006 280 log.go:172] (0xc000b80000) (0xc000ab8000) Create stream\nI0409 21:15:54.112022 280 log.go:172] (0xc000b80000) (0xc000ab8000) Stream added, broadcasting: 5\nI0409 21:15:54.112997 280 log.go:172] (0xc000b80000) Reply frame received for 5\nI0409 21:15:54.189532 280 log.go:172] 
(0xc000b80000) Data frame received for 3\nI0409 21:15:54.189555 280 log.go:172] (0xc0008fa0a0) (3) Data frame handling\nI0409 21:15:54.189567 280 log.go:172] (0xc0008fa0a0) (3) Data frame sent\nI0409 21:15:54.189881 280 log.go:172] (0xc000b80000) Data frame received for 5\nI0409 21:15:54.189896 280 log.go:172] (0xc000ab8000) (5) Data frame handling\nI0409 21:15:54.189902 280 log.go:172] (0xc000ab8000) (5) Data frame sent\nI0409 21:15:54.189907 280 log.go:172] (0xc000b80000) Data frame received for 5\nI0409 21:15:54.189911 280 log.go:172] (0xc000ab8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 21:15:54.189951 280 log.go:172] (0xc000b80000) Data frame received for 3\nI0409 21:15:54.189984 280 log.go:172] (0xc0008fa0a0) (3) Data frame handling\nI0409 21:15:54.191216 280 log.go:172] (0xc000b80000) Data frame received for 1\nI0409 21:15:54.191227 280 log.go:172] (0xc0008fa000) (1) Data frame handling\nI0409 21:15:54.191235 280 log.go:172] (0xc0008fa000) (1) Data frame sent\nI0409 21:15:54.191369 280 log.go:172] (0xc000b80000) (0xc0008fa000) Stream removed, broadcasting: 1\nI0409 21:15:54.191443 280 log.go:172] (0xc000b80000) Go away received\nI0409 21:15:54.191608 280 log.go:172] (0xc000b80000) (0xc0008fa000) Stream removed, broadcasting: 1\nI0409 21:15:54.191622 280 log.go:172] (0xc000b80000) (0xc0008fa0a0) Stream removed, broadcasting: 3\nI0409 21:15:54.191628 280 log.go:172] (0xc000b80000) (0xc000ab8000) Stream removed, broadcasting: 5\n" Apr 9 21:15:54.197: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:15:54.197: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:15:54.201: INFO: Found 1 stateful pods, waiting for 3 Apr 9 21:16:04.206: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 21:16:04.207: INFO: Waiting for pod ss-1 to enter Running 
- Ready=true, currently Running - Ready=true Apr 9 21:16:04.207: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 9 21:16:04.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:16:04.475: INFO: stderr: "I0409 21:16:04.370896 301 log.go:172] (0xc0001182c0) (0xc000aca000) Create stream\nI0409 21:16:04.370968 301 log.go:172] (0xc0001182c0) (0xc000aca000) Stream added, broadcasting: 1\nI0409 21:16:04.374138 301 log.go:172] (0xc0001182c0) Reply frame received for 1\nI0409 21:16:04.374194 301 log.go:172] (0xc0001182c0) (0xc000776000) Create stream\nI0409 21:16:04.374211 301 log.go:172] (0xc0001182c0) (0xc000776000) Stream added, broadcasting: 3\nI0409 21:16:04.375347 301 log.go:172] (0xc0001182c0) Reply frame received for 3\nI0409 21:16:04.375402 301 log.go:172] (0xc0001182c0) (0xc000aca0a0) Create stream\nI0409 21:16:04.375426 301 log.go:172] (0xc0001182c0) (0xc000aca0a0) Stream added, broadcasting: 5\nI0409 21:16:04.376499 301 log.go:172] (0xc0001182c0) Reply frame received for 5\nI0409 21:16:04.456954 301 log.go:172] (0xc0001182c0) Data frame received for 5\nI0409 21:16:04.456999 301 log.go:172] (0xc000aca0a0) (5) Data frame handling\nI0409 21:16:04.457024 301 log.go:172] (0xc000aca0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:16:04.457067 301 log.go:172] (0xc0001182c0) Data frame received for 3\nI0409 21:16:04.457085 301 log.go:172] (0xc000776000) (3) Data frame handling\nI0409 21:16:04.457104 301 log.go:172] (0xc000776000) (3) Data frame sent\nI0409 21:16:04.457283 301 log.go:172] (0xc0001182c0) Data frame received for 3\nI0409 21:16:04.457306 301 log.go:172] (0xc000776000) (3) Data frame handling\nI0409 21:16:04.457502 301 
log.go:172] (0xc0001182c0) Data frame received for 5\nI0409 21:16:04.457603 301 log.go:172] (0xc000aca0a0) (5) Data frame handling\nI0409 21:16:04.471468 301 log.go:172] (0xc0001182c0) Data frame received for 1\nI0409 21:16:04.471489 301 log.go:172] (0xc000aca000) (1) Data frame handling\nI0409 21:16:04.471497 301 log.go:172] (0xc000aca000) (1) Data frame sent\nI0409 21:16:04.471504 301 log.go:172] (0xc0001182c0) (0xc000aca000) Stream removed, broadcasting: 1\nI0409 21:16:04.471512 301 log.go:172] (0xc0001182c0) Go away received\nI0409 21:16:04.471864 301 log.go:172] (0xc0001182c0) (0xc000aca000) Stream removed, broadcasting: 1\nI0409 21:16:04.471876 301 log.go:172] (0xc0001182c0) (0xc000776000) Stream removed, broadcasting: 3\nI0409 21:16:04.471881 301 log.go:172] (0xc0001182c0) (0xc000aca0a0) Stream removed, broadcasting: 5\n" Apr 9 21:16:04.475: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:16:04.475: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:16:04.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:16:04.710: INFO: stderr: "I0409 21:16:04.605977 321 log.go:172] (0xc0007da000) (0xc0008fc8c0) Create stream\nI0409 21:16:04.606056 321 log.go:172] (0xc0007da000) (0xc0008fc8c0) Stream added, broadcasting: 1\nI0409 21:16:04.608863 321 log.go:172] (0xc0007da000) Reply frame received for 1\nI0409 21:16:04.608918 321 log.go:172] (0xc0007da000) (0xc0008fc960) Create stream\nI0409 21:16:04.608932 321 log.go:172] (0xc0007da000) (0xc0008fc960) Stream added, broadcasting: 3\nI0409 21:16:04.609909 321 log.go:172] (0xc0007da000) Reply frame received for 3\nI0409 21:16:04.609946 321 log.go:172] (0xc0007da000) (0xc0008fca00) Create stream\nI0409 21:16:04.609958 321 log.go:172] 
(0xc0007da000) (0xc0008fca00) Stream added, broadcasting: 5\nI0409 21:16:04.610789 321 log.go:172] (0xc0007da000) Reply frame received for 5\nI0409 21:16:04.674009 321 log.go:172] (0xc0007da000) Data frame received for 5\nI0409 21:16:04.674054 321 log.go:172] (0xc0008fca00) (5) Data frame handling\nI0409 21:16:04.674098 321 log.go:172] (0xc0008fca00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:16:04.703151 321 log.go:172] (0xc0007da000) Data frame received for 3\nI0409 21:16:04.703165 321 log.go:172] (0xc0008fc960) (3) Data frame handling\nI0409 21:16:04.703182 321 log.go:172] (0xc0008fc960) (3) Data frame sent\nI0409 21:16:04.703192 321 log.go:172] (0xc0007da000) Data frame received for 3\nI0409 21:16:04.703199 321 log.go:172] (0xc0008fc960) (3) Data frame handling\nI0409 21:16:04.703220 321 log.go:172] (0xc0007da000) Data frame received for 5\nI0409 21:16:04.703230 321 log.go:172] (0xc0008fca00) (5) Data frame handling\nI0409 21:16:04.704978 321 log.go:172] (0xc0007da000) Data frame received for 1\nI0409 21:16:04.704991 321 log.go:172] (0xc0008fc8c0) (1) Data frame handling\nI0409 21:16:04.704997 321 log.go:172] (0xc0008fc8c0) (1) Data frame sent\nI0409 21:16:04.705295 321 log.go:172] (0xc0007da000) (0xc0008fc8c0) Stream removed, broadcasting: 1\nI0409 21:16:04.705367 321 log.go:172] (0xc0007da000) Go away received\nI0409 21:16:04.705711 321 log.go:172] (0xc0007da000) (0xc0008fc8c0) Stream removed, broadcasting: 1\nI0409 21:16:04.705738 321 log.go:172] (0xc0007da000) (0xc0008fc960) Stream removed, broadcasting: 3\nI0409 21:16:04.705767 321 log.go:172] (0xc0007da000) (0xc0008fca00) Stream removed, broadcasting: 5\n" Apr 9 21:16:04.710: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:16:04.711: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:16:04.711: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:16:04.955: INFO: stderr: "I0409 21:16:04.843261 341 log.go:172] (0xc0000f56b0) (0xc0005afcc0) Create stream\nI0409 21:16:04.843306 341 log.go:172] (0xc0000f56b0) (0xc0005afcc0) Stream added, broadcasting: 1\nI0409 21:16:04.845584 341 log.go:172] (0xc0000f56b0) Reply frame received for 1\nI0409 21:16:04.845607 341 log.go:172] (0xc0000f56b0) (0xc0007b5400) Create stream\nI0409 21:16:04.845614 341 log.go:172] (0xc0000f56b0) (0xc0007b5400) Stream added, broadcasting: 3\nI0409 21:16:04.846422 341 log.go:172] (0xc0000f56b0) Reply frame received for 3\nI0409 21:16:04.846459 341 log.go:172] (0xc0000f56b0) (0xc0005afd60) Create stream\nI0409 21:16:04.846473 341 log.go:172] (0xc0000f56b0) (0xc0005afd60) Stream added, broadcasting: 5\nI0409 21:16:04.847328 341 log.go:172] (0xc0000f56b0) Reply frame received for 5\nI0409 21:16:04.920215 341 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0409 21:16:04.920236 341 log.go:172] (0xc0005afd60) (5) Data frame handling\nI0409 21:16:04.920249 341 log.go:172] (0xc0005afd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:16:04.948729 341 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0409 21:16:04.948776 341 log.go:172] (0xc0005afd60) (5) Data frame handling\nI0409 21:16:04.948805 341 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0409 21:16:04.948828 341 log.go:172] (0xc0007b5400) (3) Data frame handling\nI0409 21:16:04.948848 341 log.go:172] (0xc0007b5400) (3) Data frame sent\nI0409 21:16:04.949012 341 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0409 21:16:04.949036 341 log.go:172] (0xc0007b5400) (3) Data frame handling\nI0409 21:16:04.950852 341 log.go:172] (0xc0000f56b0) Data frame received for 1\nI0409 21:16:04.950876 341 log.go:172] (0xc0005afcc0) (1) Data frame handling\nI0409 
21:16:04.950888 341 log.go:172] (0xc0005afcc0) (1) Data frame sent\nI0409 21:16:04.950904 341 log.go:172] (0xc0000f56b0) (0xc0005afcc0) Stream removed, broadcasting: 1\nI0409 21:16:04.950928 341 log.go:172] (0xc0000f56b0) Go away received\nI0409 21:16:04.951236 341 log.go:172] (0xc0000f56b0) (0xc0005afcc0) Stream removed, broadcasting: 1\nI0409 21:16:04.951270 341 log.go:172] (0xc0000f56b0) (0xc0007b5400) Stream removed, broadcasting: 3\nI0409 21:16:04.951279 341 log.go:172] (0xc0000f56b0) (0xc0005afd60) Stream removed, broadcasting: 5\n" Apr 9 21:16:04.955: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:16:04.955: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:16:04.955: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 21:16:04.959: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 9 21:16:14.967: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:16:14.967: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:16:14.967: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:16:14.997: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999705s Apr 9 21:16:16.002: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97453657s Apr 9 21:16:17.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969062049s Apr 9 21:16:18.012: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963669853s Apr 9 21:16:19.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959706722s Apr 9 21:16:20.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954797617s Apr 9 21:16:21.026: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.949845404s Apr 9 
21:16:22.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944895059s Apr 9 21:16:23.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.940265015s Apr 9 21:16:24.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.563601ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1510 Apr 9 21:16:25.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:16:25.255: INFO: stderr: "I0409 21:16:25.178135 363 log.go:172] (0xc000aaa000) (0xc0007554a0) Create stream\nI0409 21:16:25.178183 363 log.go:172] (0xc000aaa000) (0xc0007554a0) Stream added, broadcasting: 1\nI0409 21:16:25.180551 363 log.go:172] (0xc000aaa000) Reply frame received for 1\nI0409 21:16:25.180599 363 log.go:172] (0xc000aaa000) (0xc000912000) Create stream\nI0409 21:16:25.180615 363 log.go:172] (0xc000aaa000) (0xc000912000) Stream added, broadcasting: 3\nI0409 21:16:25.181934 363 log.go:172] (0xc000aaa000) Reply frame received for 3\nI0409 21:16:25.181963 363 log.go:172] (0xc000aaa000) (0xc000a86000) Create stream\nI0409 21:16:25.181970 363 log.go:172] (0xc000aaa000) (0xc000a86000) Stream added, broadcasting: 5\nI0409 21:16:25.182937 363 log.go:172] (0xc000aaa000) Reply frame received for 5\nI0409 21:16:25.248805 363 log.go:172] (0xc000aaa000) Data frame received for 5\nI0409 21:16:25.248865 363 log.go:172] (0xc000aaa000) Data frame received for 3\nI0409 21:16:25.248908 363 log.go:172] (0xc000912000) (3) Data frame handling\nI0409 21:16:25.248928 363 log.go:172] (0xc000912000) (3) Data frame sent\nI0409 21:16:25.248940 363 log.go:172] (0xc000aaa000) Data frame received for 3\nI0409 21:16:25.248955 363 log.go:172] (0xc000912000) (3) Data frame handling\nI0409 21:16:25.248983 363 log.go:172] (0xc000a86000) (5) Data frame handling\nI0409 
21:16:25.249003 363 log.go:172] (0xc000a86000) (5) Data frame sent\nI0409 21:16:25.249017 363 log.go:172] (0xc000aaa000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 21:16:25.249027 363 log.go:172] (0xc000a86000) (5) Data frame handling\nI0409 21:16:25.250732 363 log.go:172] (0xc000aaa000) Data frame received for 1\nI0409 21:16:25.250856 363 log.go:172] (0xc0007554a0) (1) Data frame handling\nI0409 21:16:25.250887 363 log.go:172] (0xc0007554a0) (1) Data frame sent\nI0409 21:16:25.250905 363 log.go:172] (0xc000aaa000) (0xc0007554a0) Stream removed, broadcasting: 1\nI0409 21:16:25.250930 363 log.go:172] (0xc000aaa000) Go away received\nI0409 21:16:25.251264 363 log.go:172] (0xc000aaa000) (0xc0007554a0) Stream removed, broadcasting: 1\nI0409 21:16:25.251287 363 log.go:172] (0xc000aaa000) (0xc000912000) Stream removed, broadcasting: 3\nI0409 21:16:25.251304 363 log.go:172] (0xc000aaa000) (0xc000a86000) Stream removed, broadcasting: 5\n" Apr 9 21:16:25.255: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:16:25.255: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:16:25.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:16:25.440: INFO: stderr: "I0409 21:16:25.376848 383 log.go:172] (0xc000946630) (0xc00064fb80) Create stream\nI0409 21:16:25.376905 383 log.go:172] (0xc000946630) (0xc00064fb80) Stream added, broadcasting: 1\nI0409 21:16:25.379827 383 log.go:172] (0xc000946630) Reply frame received for 1\nI0409 21:16:25.379891 383 log.go:172] (0xc000946630) (0xc00064fd60) Create stream\nI0409 21:16:25.379918 383 log.go:172] (0xc000946630) (0xc00064fd60) Stream added, broadcasting: 3\nI0409 21:16:25.381070 383 log.go:172] (0xc000946630) Reply frame 
received for 3\nI0409 21:16:25.381105 383 log.go:172] (0xc000946630) (0xc00091e000) Create stream\nI0409 21:16:25.381216 383 log.go:172] (0xc000946630) (0xc00091e000) Stream added, broadcasting: 5\nI0409 21:16:25.382128 383 log.go:172] (0xc000946630) Reply frame received for 5\nI0409 21:16:25.433043 383 log.go:172] (0xc000946630) Data frame received for 3\nI0409 21:16:25.433087 383 log.go:172] (0xc00064fd60) (3) Data frame handling\nI0409 21:16:25.433104 383 log.go:172] (0xc00064fd60) (3) Data frame sent\nI0409 21:16:25.433254 383 log.go:172] (0xc000946630) Data frame received for 3\nI0409 21:16:25.433288 383 log.go:172] (0xc000946630) Data frame received for 5\nI0409 21:16:25.433330 383 log.go:172] (0xc00091e000) (5) Data frame handling\nI0409 21:16:25.433351 383 log.go:172] (0xc00091e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 21:16:25.433377 383 log.go:172] (0xc00064fd60) (3) Data frame handling\nI0409 21:16:25.433441 383 log.go:172] (0xc000946630) Data frame received for 5\nI0409 21:16:25.433475 383 log.go:172] (0xc00091e000) (5) Data frame handling\nI0409 21:16:25.435082 383 log.go:172] (0xc000946630) Data frame received for 1\nI0409 21:16:25.435112 383 log.go:172] (0xc00064fb80) (1) Data frame handling\nI0409 21:16:25.435124 383 log.go:172] (0xc00064fb80) (1) Data frame sent\nI0409 21:16:25.435137 383 log.go:172] (0xc000946630) (0xc00064fb80) Stream removed, broadcasting: 1\nI0409 21:16:25.435154 383 log.go:172] (0xc000946630) Go away received\nI0409 21:16:25.435589 383 log.go:172] (0xc000946630) (0xc00064fb80) Stream removed, broadcasting: 1\nI0409 21:16:25.435612 383 log.go:172] (0xc000946630) (0xc00064fd60) Stream removed, broadcasting: 3\nI0409 21:16:25.435624 383 log.go:172] (0xc000946630) (0xc00091e000) Stream removed, broadcasting: 5\n" Apr 9 21:16:25.440: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:16:25.440: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:16:25.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1510 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:16:25.649: INFO: stderr: "I0409 21:16:25.588407 404 log.go:172] (0xc000996000) (0xc00072a000) Create stream\nI0409 21:16:25.588460 404 log.go:172] (0xc000996000) (0xc00072a000) Stream added, broadcasting: 1\nI0409 21:16:25.590414 404 log.go:172] (0xc000996000) Reply frame received for 1\nI0409 21:16:25.590463 404 log.go:172] (0xc000996000) (0xc000916000) Create stream\nI0409 21:16:25.590478 404 log.go:172] (0xc000996000) (0xc000916000) Stream added, broadcasting: 3\nI0409 21:16:25.591203 404 log.go:172] (0xc000996000) Reply frame received for 3\nI0409 21:16:25.591238 404 log.go:172] (0xc000996000) (0xc00072a0a0) Create stream\nI0409 21:16:25.591247 404 log.go:172] (0xc000996000) (0xc00072a0a0) Stream added, broadcasting: 5\nI0409 21:16:25.591912 404 log.go:172] (0xc000996000) Reply frame received for 5\nI0409 21:16:25.641674 404 log.go:172] (0xc000996000) Data frame received for 3\nI0409 21:16:25.641703 404 log.go:172] (0xc000916000) (3) Data frame handling\nI0409 21:16:25.641724 404 log.go:172] (0xc000916000) (3) Data frame sent\nI0409 21:16:25.641735 404 log.go:172] (0xc000996000) Data frame received for 3\nI0409 21:16:25.641743 404 log.go:172] (0xc000916000) (3) Data frame handling\nI0409 21:16:25.641923 404 log.go:172] (0xc000996000) Data frame received for 5\nI0409 21:16:25.641951 404 log.go:172] (0xc00072a0a0) (5) Data frame handling\nI0409 21:16:25.641962 404 log.go:172] (0xc00072a0a0) (5) Data frame sent\nI0409 21:16:25.641973 404 log.go:172] (0xc000996000) Data frame received for 5\nI0409 21:16:25.641987 404 log.go:172] (0xc00072a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 21:16:25.643565 404 
log.go:172] (0xc000996000) Data frame received for 1\nI0409 21:16:25.643592 404 log.go:172] (0xc00072a000) (1) Data frame handling\nI0409 21:16:25.643604 404 log.go:172] (0xc00072a000) (1) Data frame sent\nI0409 21:16:25.643811 404 log.go:172] (0xc000996000) (0xc00072a000) Stream removed, broadcasting: 1\nI0409 21:16:25.644040 404 log.go:172] (0xc000996000) Go away received\nI0409 21:16:25.644311 404 log.go:172] (0xc000996000) (0xc00072a000) Stream removed, broadcasting: 1\nI0409 21:16:25.644349 404 log.go:172] (0xc000996000) (0xc000916000) Stream removed, broadcasting: 3\nI0409 21:16:25.644375 404 log.go:172] (0xc000996000) (0xc00072a0a0) Stream removed, broadcasting: 5\n" Apr 9 21:16:25.649: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:16:25.649: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:16:25.649: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 9 21:16:55.674: INFO: Deleting all statefulset in ns statefulset-1510 Apr 9 21:16:55.677: INFO: Scaling statefulset ss to 0 Apr 9 21:16:55.686: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 21:16:55.689: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:16:55.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1510" for this suite. 
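Editor's note on the run above: the StatefulSet scaling test drives pods unhealthy by `kubectl exec`-ing into each replica and moving httpd's `index.html` out of the web root, which makes the HTTP readiness probe fail; moving it back restores readiness. The trailing `|| true` keeps the exec from reporting an error if the file has already been moved. A minimal local sketch of that shell trick (hypothetical temp paths standing in for `/usr/local/apache2`, no cluster required):

```shell
# Local simulation of the probe-breaking trick, using a hypothetical temp dir.
demo=$(mktemp -d)                       # stand-in for /usr/local/apache2
mkdir -p "$demo/htdocs"
echo "it works" > "$demo/htdocs/index.html"

# Break "readiness": move index.html out of the web root.
mv -v "$demo/htdocs/index.html" "$demo/" || true
# Re-running is harmless: mv fails (file already moved) but `|| true` masks the
# nonzero exit, so a retried `kubectl exec` never reports an error.
mv -v "$demo/htdocs/index.html" "$demo/" || true

# Restore "readiness": move it back, as the test does before scaling further.
mv -v "$demo/index.html" "$demo/htdocs/" || true
```

With the file gone, the probe fails, the pod goes `Ready=false`, and the controller halts scaling in either direction until readiness is restored, which is exactly the "doesn't scale past N" countdown visible in the log.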
• [SLOW TEST:92.178 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":26,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:16:55.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:17:08.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8309" for this suite. • [SLOW TEST:13.183 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":27,"skipped":369,"failed":0} [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:17:08.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 9 21:17:12.978: INFO: &Pod{ObjectMeta:{send-events-a649ddfb-2d3f-458d-8dd8-7e7a6a9679d0 events-5053 /api/v1/namespaces/events-5053/pods/send-events-a649ddfb-2d3f-458d-8dd8-7e7a6a9679d0 da04a26f-68e7-40bf-8e80-53c59aea972f 6768473 0 2020-04-09 21:17:08 +0000 UTC map[name:foo time:942213498] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zn5zt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zn5zt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zn5zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:17:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:17:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:17:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 21:17:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.54,StartTime:2020-04-09 21:17:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 21:17:11 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://f0b902a4d7c8e093edd5307b706473db315a91c4f291642c24debaf4a04273f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 9 21:17:14.982: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 9 21:17:16.986: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:17:16.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5053" for this suite. 
• [SLOW TEST:8.138 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":28,"skipped":369,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:17:17.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:17:21.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9767" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":382,"failed":0}
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:17:21.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:17:25.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4340" for this suite.
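Each finished spec is recorded as a single JSON progress line (`{"msg":"PASSED ...","total":278,"completed":N,"skipped":N,"failed":N}`, matching the `Test Suite starting` record at the top of the run). A minimal sketch for tallying a saved log offline — the helper name and regex are my own, not part of the e2e framework:

```python
import json
import re

# Matches the Ginkgo progress records embedded in the log, e.g.
# {"msg":"PASSED [k8s.io] ...","total":278,"completed":29,"skipped":382,"failed":0}
PROGRESS_RE = re.compile(
    r'\{"msg":"(?:PASSED|FAILED|Test Suite) [^"]*",'
    r'"total":\d+,"completed":\d+,"skipped":\d+,"failed":\d+\}'
)

def summarize(log_text):
    """Return (completed, skipped, failed) from the last progress record seen,
    or None if the text contains no progress records."""
    last = None
    for match in PROGRESS_RE.finditer(log_text):
        last = json.loads(match.group(0))
    if last is None:
        return None
    return last["completed"], last["skipped"], last["failed"]
```

The last record wins because the counters are cumulative across the run.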
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":382,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:17:25.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-502ea7a0-64a1-4fcc-8359-8becee2e69f8
STEP: Creating a pod to test consume configMaps
Apr 9 21:17:25.295: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733" in namespace "projected-2733" to be "success or failure"
Apr 9 21:17:25.298: INFO: Pod "pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733": Phase="Pending", Reason="", readiness=false. Elapsed: 3.677055ms
Apr 9 21:17:27.302: INFO: Pod "pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007005987s
Apr 9 21:17:29.306: INFO: Pod "pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.01114924s
STEP: Saw pod success
Apr 9 21:17:29.306: INFO: Pod "pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733" satisfied condition "success or failure"
Apr 9 21:17:29.308: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733 container projected-configmap-volume-test:
STEP: delete the pod
Apr 9 21:17:29.348: INFO: Waiting for pod pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733 to disappear
Apr 9 21:17:29.358: INFO: Pod pod-projected-configmaps-e35164f9-ff02-42f3-8437-142b9d115733 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:17:29.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2733" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":400,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:17:29.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach]
[k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:17:33.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6077" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":403,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:17:33.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 21:17:33.786: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 21:17:35.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063853, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063853, 
loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063853, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063853, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 21:17:38.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:17:38.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9706-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:17:40.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6978" for this suite. STEP: Destroying namespace "webhook-6978-markers" for this suite. 
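The webhook setup above polls the printed v1.DeploymentStatus until the deployment reports minimum availability (the condition flips from MinimumReplicasUnavailable once ReadyReplicas catches up to the desired count). A minimal sketch of that readiness predicate, assuming a plain dict shaped like the serialized status fields — the function name and dict layout are illustrative, not the framework's API:

```python
def deployment_ready(status, desired_replicas):
    """True once all desired replicas are updated, ready, and available --
    the same fields the e2e framework inspects before proceeding."""
    return (
        status.get("updatedReplicas", 0) == desired_replicas
        and status.get("readyReplicas", 0) == desired_replicas
        and status.get("availableReplicas", 0) == desired_replicas
        and status.get("unavailableReplicas", 0) == 0
    )
```

With the status printed above (ReadyReplicas:0, UnavailableReplicas:1) this returns False, so the test keeps waiting.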
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.644 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":33,"skipped":408,"failed":0}
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:17:40.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 9 21:17:50.249: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 9 21:17:50.249: INFO: 
>>> kubeConfig: /root/.kube/config I0409 21:17:50.285778 6 log.go:172] (0xc001775ad0) (0xc0027ef040) Create stream I0409 21:17:50.285809 6 log.go:172] (0xc001775ad0) (0xc0027ef040) Stream added, broadcasting: 1 I0409 21:17:50.287863 6 log.go:172] (0xc001775ad0) Reply frame received for 1 I0409 21:17:50.287937 6 log.go:172] (0xc001775ad0) (0xc002772000) Create stream I0409 21:17:50.287964 6 log.go:172] (0xc001775ad0) (0xc002772000) Stream added, broadcasting: 3 I0409 21:17:50.289031 6 log.go:172] (0xc001775ad0) Reply frame received for 3 I0409 21:17:50.289080 6 log.go:172] (0xc001775ad0) (0xc0027720a0) Create stream I0409 21:17:50.289106 6 log.go:172] (0xc001775ad0) (0xc0027720a0) Stream added, broadcasting: 5 I0409 21:17:50.290231 6 log.go:172] (0xc001775ad0) Reply frame received for 5 I0409 21:17:50.360893 6 log.go:172] (0xc001775ad0) Data frame received for 5 I0409 21:17:50.360980 6 log.go:172] (0xc0027720a0) (5) Data frame handling I0409 21:17:50.361013 6 log.go:172] (0xc001775ad0) Data frame received for 3 I0409 21:17:50.361030 6 log.go:172] (0xc002772000) (3) Data frame handling I0409 21:17:50.361073 6 log.go:172] (0xc002772000) (3) Data frame sent I0409 21:17:50.361085 6 log.go:172] (0xc001775ad0) Data frame received for 3 I0409 21:17:50.361091 6 log.go:172] (0xc002772000) (3) Data frame handling I0409 21:17:50.362936 6 log.go:172] (0xc001775ad0) Data frame received for 1 I0409 21:17:50.362956 6 log.go:172] (0xc0027ef040) (1) Data frame handling I0409 21:17:50.362967 6 log.go:172] (0xc0027ef040) (1) Data frame sent I0409 21:17:50.362981 6 log.go:172] (0xc001775ad0) (0xc0027ef040) Stream removed, broadcasting: 1 I0409 21:17:50.362998 6 log.go:172] (0xc001775ad0) Go away received I0409 21:17:50.363297 6 log.go:172] (0xc001775ad0) (0xc0027ef040) Stream removed, broadcasting: 1 I0409 21:17:50.363321 6 log.go:172] (0xc001775ad0) (0xc002772000) Stream removed, broadcasting: 3 I0409 21:17:50.363339 6 log.go:172] (0xc001775ad0) (0xc0027720a0) Stream removed, 
broadcasting: 5 Apr 9 21:17:50.363: INFO: Exec stderr: "" Apr 9 21:17:50.363: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:50.363: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:50.392961 6 log.go:172] (0xc002142370) (0xc001dd0320) Create stream I0409 21:17:50.392991 6 log.go:172] (0xc002142370) (0xc001dd0320) Stream added, broadcasting: 1 I0409 21:17:50.394979 6 log.go:172] (0xc002142370) Reply frame received for 1 I0409 21:17:50.395009 6 log.go:172] (0xc002142370) (0xc0027ef220) Create stream I0409 21:17:50.395020 6 log.go:172] (0xc002142370) (0xc0027ef220) Stream added, broadcasting: 3 I0409 21:17:50.395910 6 log.go:172] (0xc002142370) Reply frame received for 3 I0409 21:17:50.395954 6 log.go:172] (0xc002142370) (0xc002772140) Create stream I0409 21:17:50.395983 6 log.go:172] (0xc002142370) (0xc002772140) Stream added, broadcasting: 5 I0409 21:17:50.396989 6 log.go:172] (0xc002142370) Reply frame received for 5 I0409 21:17:50.475737 6 log.go:172] (0xc002142370) Data frame received for 3 I0409 21:17:50.475774 6 log.go:172] (0xc0027ef220) (3) Data frame handling I0409 21:17:50.475791 6 log.go:172] (0xc0027ef220) (3) Data frame sent I0409 21:17:50.475805 6 log.go:172] (0xc002142370) Data frame received for 3 I0409 21:17:50.475817 6 log.go:172] (0xc0027ef220) (3) Data frame handling I0409 21:17:50.475874 6 log.go:172] (0xc002142370) Data frame received for 5 I0409 21:17:50.475932 6 log.go:172] (0xc002772140) (5) Data frame handling I0409 21:17:50.477074 6 log.go:172] (0xc002142370) Data frame received for 1 I0409 21:17:50.477092 6 log.go:172] (0xc001dd0320) (1) Data frame handling I0409 21:17:50.477100 6 log.go:172] (0xc001dd0320) (1) Data frame sent I0409 21:17:50.477227 6 log.go:172] (0xc002142370) (0xc001dd0320) Stream removed, broadcasting: 1 I0409 21:17:50.477250 6 log.go:172] 
(0xc002142370) Go away received I0409 21:17:50.477335 6 log.go:172] (0xc002142370) (0xc001dd0320) Stream removed, broadcasting: 1 I0409 21:17:50.477374 6 log.go:172] (0xc002142370) (0xc0027ef220) Stream removed, broadcasting: 3 I0409 21:17:50.477434 6 log.go:172] (0xc002142370) (0xc002772140) Stream removed, broadcasting: 5 Apr 9 21:17:50.477: INFO: Exec stderr: "" Apr 9 21:17:50.477: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:50.477: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:50.511544 6 log.go:172] (0xc001b04370) (0xc00296aaa0) Create stream I0409 21:17:50.511574 6 log.go:172] (0xc001b04370) (0xc00296aaa0) Stream added, broadcasting: 1 I0409 21:17:50.513688 6 log.go:172] (0xc001b04370) Reply frame received for 1 I0409 21:17:50.513750 6 log.go:172] (0xc001b04370) (0xc00296ab40) Create stream I0409 21:17:50.513771 6 log.go:172] (0xc001b04370) (0xc00296ab40) Stream added, broadcasting: 3 I0409 21:17:50.514779 6 log.go:172] (0xc001b04370) Reply frame received for 3 I0409 21:17:50.514822 6 log.go:172] (0xc001b04370) (0xc0027721e0) Create stream I0409 21:17:50.514837 6 log.go:172] (0xc001b04370) (0xc0027721e0) Stream added, broadcasting: 5 I0409 21:17:50.515893 6 log.go:172] (0xc001b04370) Reply frame received for 5 I0409 21:17:50.583115 6 log.go:172] (0xc001b04370) Data frame received for 5 I0409 21:17:50.583160 6 log.go:172] (0xc0027721e0) (5) Data frame handling I0409 21:17:50.583190 6 log.go:172] (0xc001b04370) Data frame received for 3 I0409 21:17:50.583208 6 log.go:172] (0xc00296ab40) (3) Data frame handling I0409 21:17:50.583228 6 log.go:172] (0xc00296ab40) (3) Data frame sent I0409 21:17:50.583245 6 log.go:172] (0xc001b04370) Data frame received for 3 I0409 21:17:50.583261 6 log.go:172] (0xc00296ab40) (3) Data frame handling I0409 21:17:50.584604 6 log.go:172] (0xc001b04370) Data frame 
received for 1 I0409 21:17:50.584625 6 log.go:172] (0xc00296aaa0) (1) Data frame handling I0409 21:17:50.584636 6 log.go:172] (0xc00296aaa0) (1) Data frame sent I0409 21:17:50.584653 6 log.go:172] (0xc001b04370) (0xc00296aaa0) Stream removed, broadcasting: 1 I0409 21:17:50.584745 6 log.go:172] (0xc001b04370) (0xc00296aaa0) Stream removed, broadcasting: 1 I0409 21:17:50.584765 6 log.go:172] (0xc001b04370) (0xc00296ab40) Stream removed, broadcasting: 3 I0409 21:17:50.584810 6 log.go:172] (0xc001b04370) Go away received I0409 21:17:50.584941 6 log.go:172] (0xc001b04370) (0xc0027721e0) Stream removed, broadcasting: 5 Apr 9 21:17:50.584: INFO: Exec stderr: "" Apr 9 21:17:50.584: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:50.585: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:50.613839 6 log.go:172] (0xc001d0a4d0) (0xc002772460) Create stream I0409 21:17:50.613869 6 log.go:172] (0xc001d0a4d0) (0xc002772460) Stream added, broadcasting: 1 I0409 21:17:50.615980 6 log.go:172] (0xc001d0a4d0) Reply frame received for 1 I0409 21:17:50.616025 6 log.go:172] (0xc001d0a4d0) (0xc00296abe0) Create stream I0409 21:17:50.616040 6 log.go:172] (0xc001d0a4d0) (0xc00296abe0) Stream added, broadcasting: 3 I0409 21:17:50.616917 6 log.go:172] (0xc001d0a4d0) Reply frame received for 3 I0409 21:17:50.616950 6 log.go:172] (0xc001d0a4d0) (0xc002294000) Create stream I0409 21:17:50.616967 6 log.go:172] (0xc001d0a4d0) (0xc002294000) Stream added, broadcasting: 5 I0409 21:17:50.618078 6 log.go:172] (0xc001d0a4d0) Reply frame received for 5 I0409 21:17:50.670938 6 log.go:172] (0xc001d0a4d0) Data frame received for 3 I0409 21:17:50.670983 6 log.go:172] (0xc00296abe0) (3) Data frame handling I0409 21:17:50.671005 6 log.go:172] (0xc00296abe0) (3) Data frame sent I0409 21:17:50.671026 6 log.go:172] (0xc001d0a4d0) Data frame 
received for 3 I0409 21:17:50.671043 6 log.go:172] (0xc00296abe0) (3) Data frame handling I0409 21:17:50.671062 6 log.go:172] (0xc001d0a4d0) Data frame received for 5 I0409 21:17:50.671085 6 log.go:172] (0xc002294000) (5) Data frame handling I0409 21:17:50.672567 6 log.go:172] (0xc001d0a4d0) Data frame received for 1 I0409 21:17:50.672606 6 log.go:172] (0xc002772460) (1) Data frame handling I0409 21:17:50.672632 6 log.go:172] (0xc002772460) (1) Data frame sent I0409 21:17:50.672659 6 log.go:172] (0xc001d0a4d0) (0xc002772460) Stream removed, broadcasting: 1 I0409 21:17:50.672695 6 log.go:172] (0xc001d0a4d0) Go away received I0409 21:17:50.672815 6 log.go:172] (0xc001d0a4d0) (0xc002772460) Stream removed, broadcasting: 1 I0409 21:17:50.672833 6 log.go:172] (0xc001d0a4d0) (0xc00296abe0) Stream removed, broadcasting: 3 I0409 21:17:50.672841 6 log.go:172] (0xc001d0a4d0) (0xc002294000) Stream removed, broadcasting: 5 Apr 9 21:17:50.672: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 9 21:17:50.672: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:50.672: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:50.705628 6 log.go:172] (0xc000eb8210) (0xc0027ef720) Create stream I0409 21:17:50.705657 6 log.go:172] (0xc000eb8210) (0xc0027ef720) Stream added, broadcasting: 1 I0409 21:17:50.707772 6 log.go:172] (0xc000eb8210) Reply frame received for 1 I0409 21:17:50.707800 6 log.go:172] (0xc000eb8210) (0xc002772500) Create stream I0409 21:17:50.707809 6 log.go:172] (0xc000eb8210) (0xc002772500) Stream added, broadcasting: 3 I0409 21:17:50.708596 6 log.go:172] (0xc000eb8210) Reply frame received for 3 I0409 21:17:50.708621 6 log.go:172] (0xc000eb8210) (0xc0022940a0) Create stream I0409 21:17:50.708630 6 log.go:172] (0xc000eb8210) 
(0xc0022940a0) Stream added, broadcasting: 5 I0409 21:17:50.709439 6 log.go:172] (0xc000eb8210) Reply frame received for 5 I0409 21:17:50.776654 6 log.go:172] (0xc000eb8210) Data frame received for 5 I0409 21:17:50.776686 6 log.go:172] (0xc0022940a0) (5) Data frame handling I0409 21:17:50.776742 6 log.go:172] (0xc000eb8210) Data frame received for 3 I0409 21:17:50.776791 6 log.go:172] (0xc002772500) (3) Data frame handling I0409 21:17:50.776846 6 log.go:172] (0xc002772500) (3) Data frame sent I0409 21:17:50.776931 6 log.go:172] (0xc000eb8210) Data frame received for 3 I0409 21:17:50.776963 6 log.go:172] (0xc002772500) (3) Data frame handling I0409 21:17:50.778212 6 log.go:172] (0xc000eb8210) Data frame received for 1 I0409 21:17:50.778241 6 log.go:172] (0xc0027ef720) (1) Data frame handling I0409 21:17:50.778258 6 log.go:172] (0xc0027ef720) (1) Data frame sent I0409 21:17:50.778294 6 log.go:172] (0xc000eb8210) (0xc0027ef720) Stream removed, broadcasting: 1 I0409 21:17:50.778403 6 log.go:172] (0xc000eb8210) Go away received I0409 21:17:50.778437 6 log.go:172] (0xc000eb8210) (0xc0027ef720) Stream removed, broadcasting: 1 I0409 21:17:50.778455 6 log.go:172] (0xc000eb8210) (0xc002772500) Stream removed, broadcasting: 3 I0409 21:17:50.778468 6 log.go:172] (0xc000eb8210) (0xc0022940a0) Stream removed, broadcasting: 5 Apr 9 21:17:50.778: INFO: Exec stderr: "" Apr 9 21:17:50.778: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:50.778: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:50.811579 6 log.go:172] (0xc000ff0370) (0xc002294320) Create stream I0409 21:17:50.811611 6 log.go:172] (0xc000ff0370) (0xc002294320) Stream added, broadcasting: 1 I0409 21:17:50.814411 6 log.go:172] (0xc000ff0370) Reply frame received for 1 I0409 21:17:50.814472 6 log.go:172] (0xc000ff0370) (0xc0027ef7c0) Create stream 
I0409 21:17:50.814489 6 log.go:172] (0xc000ff0370) (0xc0027ef7c0) Stream added, broadcasting: 3 I0409 21:17:50.815219 6 log.go:172] (0xc000ff0370) Reply frame received for 3 I0409 21:17:50.815241 6 log.go:172] (0xc000ff0370) (0xc00296ac80) Create stream I0409 21:17:50.815248 6 log.go:172] (0xc000ff0370) (0xc00296ac80) Stream added, broadcasting: 5 I0409 21:17:50.815982 6 log.go:172] (0xc000ff0370) Reply frame received for 5 I0409 21:17:50.893398 6 log.go:172] (0xc000ff0370) Data frame received for 3 I0409 21:17:50.893458 6 log.go:172] (0xc0027ef7c0) (3) Data frame handling I0409 21:17:50.893468 6 log.go:172] (0xc0027ef7c0) (3) Data frame sent I0409 21:17:50.893473 6 log.go:172] (0xc000ff0370) Data frame received for 3 I0409 21:17:50.893481 6 log.go:172] (0xc0027ef7c0) (3) Data frame handling I0409 21:17:50.893507 6 log.go:172] (0xc000ff0370) Data frame received for 5 I0409 21:17:50.893547 6 log.go:172] (0xc00296ac80) (5) Data frame handling I0409 21:17:50.895324 6 log.go:172] (0xc000ff0370) Data frame received for 1 I0409 21:17:50.895364 6 log.go:172] (0xc002294320) (1) Data frame handling I0409 21:17:50.895379 6 log.go:172] (0xc002294320) (1) Data frame sent I0409 21:17:50.895387 6 log.go:172] (0xc000ff0370) (0xc002294320) Stream removed, broadcasting: 1 I0409 21:17:50.895400 6 log.go:172] (0xc000ff0370) Go away received I0409 21:17:50.895499 6 log.go:172] (0xc000ff0370) (0xc002294320) Stream removed, broadcasting: 1 I0409 21:17:50.895521 6 log.go:172] (0xc000ff0370) (0xc0027ef7c0) Stream removed, broadcasting: 3 I0409 21:17:50.895533 6 log.go:172] (0xc000ff0370) (0xc00296ac80) Stream removed, broadcasting: 5 Apr 9 21:17:50.895: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 9 21:17:50.895: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Apr 9 21:17:50.895: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:50.927781 6 log.go:172] (0xc0021429a0) (0xc001dd08c0) Create stream I0409 21:17:50.927805 6 log.go:172] (0xc0021429a0) (0xc001dd08c0) Stream added, broadcasting: 1 I0409 21:17:50.930524 6 log.go:172] (0xc0021429a0) Reply frame received for 1 I0409 21:17:50.930555 6 log.go:172] (0xc0021429a0) (0xc0022943c0) Create stream I0409 21:17:50.930568 6 log.go:172] (0xc0021429a0) (0xc0022943c0) Stream added, broadcasting: 3 I0409 21:17:50.931545 6 log.go:172] (0xc0021429a0) Reply frame received for 3 I0409 21:17:50.931593 6 log.go:172] (0xc0021429a0) (0xc00296ad20) Create stream I0409 21:17:50.931604 6 log.go:172] (0xc0021429a0) (0xc00296ad20) Stream added, broadcasting: 5 I0409 21:17:50.932559 6 log.go:172] (0xc0021429a0) Reply frame received for 5 I0409 21:17:51.000812 6 log.go:172] (0xc0021429a0) Data frame received for 3 I0409 21:17:51.000850 6 log.go:172] (0xc0022943c0) (3) Data frame handling I0409 21:17:51.000891 6 log.go:172] (0xc0022943c0) (3) Data frame sent I0409 21:17:51.000925 6 log.go:172] (0xc0021429a0) Data frame received for 3 I0409 21:17:51.000947 6 log.go:172] (0xc0022943c0) (3) Data frame handling I0409 21:17:51.000985 6 log.go:172] (0xc0021429a0) Data frame received for 5 I0409 21:17:51.001027 6 log.go:172] (0xc00296ad20) (5) Data frame handling I0409 21:17:51.002481 6 log.go:172] (0xc0021429a0) Data frame received for 1 I0409 21:17:51.002511 6 log.go:172] (0xc001dd08c0) (1) Data frame handling I0409 21:17:51.002533 6 log.go:172] (0xc001dd08c0) (1) Data frame sent I0409 21:17:51.002563 6 log.go:172] (0xc0021429a0) (0xc001dd08c0) Stream removed, broadcasting: 1 I0409 21:17:51.002587 6 log.go:172] (0xc0021429a0) Go away received I0409 21:17:51.002702 6 log.go:172] (0xc0021429a0) (0xc001dd08c0) Stream removed, broadcasting: 1 I0409 21:17:51.002731 6 log.go:172] (0xc0021429a0) (0xc0022943c0) Stream removed, broadcasting: 3 I0409 21:17:51.002743 6 log.go:172] 
(0xc0021429a0) (0xc00296ad20) Stream removed, broadcasting: 5 Apr 9 21:17:51.002: INFO: Exec stderr: "" Apr 9 21:17:51.002: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:51.002: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:51.039513 6 log.go:172] (0xc001b04b00) (0xc00296afa0) Create stream I0409 21:17:51.039541 6 log.go:172] (0xc001b04b00) (0xc00296afa0) Stream added, broadcasting: 1 I0409 21:17:51.043253 6 log.go:172] (0xc001b04b00) Reply frame received for 1 I0409 21:17:51.043335 6 log.go:172] (0xc001b04b00) (0xc0027725a0) Create stream I0409 21:17:51.043363 6 log.go:172] (0xc001b04b00) (0xc0027725a0) Stream added, broadcasting: 3 I0409 21:17:51.044434 6 log.go:172] (0xc001b04b00) Reply frame received for 3 I0409 21:17:51.044477 6 log.go:172] (0xc001b04b00) (0xc002772640) Create stream I0409 21:17:51.044495 6 log.go:172] (0xc001b04b00) (0xc002772640) Stream added, broadcasting: 5 I0409 21:17:51.045539 6 log.go:172] (0xc001b04b00) Reply frame received for 5 I0409 21:17:51.110319 6 log.go:172] (0xc001b04b00) Data frame received for 5 I0409 21:17:51.110361 6 log.go:172] (0xc002772640) (5) Data frame handling I0409 21:17:51.110392 6 log.go:172] (0xc001b04b00) Data frame received for 3 I0409 21:17:51.110409 6 log.go:172] (0xc0027725a0) (3) Data frame handling I0409 21:17:51.110425 6 log.go:172] (0xc0027725a0) (3) Data frame sent I0409 21:17:51.110436 6 log.go:172] (0xc001b04b00) Data frame received for 3 I0409 21:17:51.110449 6 log.go:172] (0xc0027725a0) (3) Data frame handling I0409 21:17:51.111747 6 log.go:172] (0xc001b04b00) Data frame received for 1 I0409 21:17:51.111789 6 log.go:172] (0xc00296afa0) (1) Data frame handling I0409 21:17:51.111807 6 log.go:172] (0xc00296afa0) (1) Data frame sent I0409 21:17:51.111826 6 log.go:172] (0xc001b04b00) (0xc00296afa0) Stream removed, 
broadcasting: 1 I0409 21:17:51.111844 6 log.go:172] (0xc001b04b00) Go away received I0409 21:17:51.111980 6 log.go:172] (0xc001b04b00) (0xc00296afa0) Stream removed, broadcasting: 1 I0409 21:17:51.112010 6 log.go:172] (0xc001b04b00) (0xc0027725a0) Stream removed, broadcasting: 3 I0409 21:17:51.112026 6 log.go:172] (0xc001b04b00) (0xc002772640) Stream removed, broadcasting: 5 Apr 9 21:17:51.112: INFO: Exec stderr: "" Apr 9 21:17:51.112: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:51.112: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:51.146576 6 log.go:172] (0xc000ff0630) (0xc002294460) Create stream I0409 21:17:51.146602 6 log.go:172] (0xc000ff0630) (0xc002294460) Stream added, broadcasting: 1 I0409 21:17:51.150084 6 log.go:172] (0xc000ff0630) Reply frame received for 1 I0409 21:17:51.150127 6 log.go:172] (0xc000ff0630) (0xc00296b040) Create stream I0409 21:17:51.150142 6 log.go:172] (0xc000ff0630) (0xc00296b040) Stream added, broadcasting: 3 I0409 21:17:51.151155 6 log.go:172] (0xc000ff0630) Reply frame received for 3 I0409 21:17:51.151206 6 log.go:172] (0xc000ff0630) (0xc00296b0e0) Create stream I0409 21:17:51.151230 6 log.go:172] (0xc000ff0630) (0xc00296b0e0) Stream added, broadcasting: 5 I0409 21:17:51.152336 6 log.go:172] (0xc000ff0630) Reply frame received for 5 I0409 21:17:51.207940 6 log.go:172] (0xc000ff0630) Data frame received for 5 I0409 21:17:51.207972 6 log.go:172] (0xc00296b0e0) (5) Data frame handling I0409 21:17:51.207996 6 log.go:172] (0xc000ff0630) Data frame received for 3 I0409 21:17:51.208017 6 log.go:172] (0xc00296b040) (3) Data frame handling I0409 21:17:51.208027 6 log.go:172] (0xc00296b040) (3) Data frame sent I0409 21:17:51.208038 6 log.go:172] (0xc000ff0630) Data frame received for 3 I0409 21:17:51.208049 6 log.go:172] (0xc00296b040) (3) Data frame handling 
I0409 21:17:51.209247 6 log.go:172] (0xc000ff0630) Data frame received for 1 I0409 21:17:51.209278 6 log.go:172] (0xc002294460) (1) Data frame handling I0409 21:17:51.209294 6 log.go:172] (0xc002294460) (1) Data frame sent I0409 21:17:51.209307 6 log.go:172] (0xc000ff0630) (0xc002294460) Stream removed, broadcasting: 1 I0409 21:17:51.209409 6 log.go:172] (0xc000ff0630) Go away received I0409 21:17:51.209429 6 log.go:172] (0xc000ff0630) (0xc002294460) Stream removed, broadcasting: 1 I0409 21:17:51.209441 6 log.go:172] (0xc000ff0630) (0xc00296b040) Stream removed, broadcasting: 3 I0409 21:17:51.209454 6 log.go:172] (0xc000ff0630) (0xc00296b0e0) Stream removed, broadcasting: 5 Apr 9 21:17:51.209: INFO: Exec stderr: "" Apr 9 21:17:51.209: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-617 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:17:51.209: INFO: >>> kubeConfig: /root/.kube/config I0409 21:17:51.237048 6 log.go:172] (0xc001b04e70) (0xc00296b180) Create stream I0409 21:17:51.237095 6 log.go:172] (0xc001b04e70) (0xc00296b180) Stream added, broadcasting: 1 I0409 21:17:51.241013 6 log.go:172] (0xc001b04e70) Reply frame received for 1 I0409 21:17:51.241060 6 log.go:172] (0xc001b04e70) (0xc0022945a0) Create stream I0409 21:17:51.241076 6 log.go:172] (0xc001b04e70) (0xc0022945a0) Stream added, broadcasting: 3 I0409 21:17:51.242556 6 log.go:172] (0xc001b04e70) Reply frame received for 3 I0409 21:17:51.242594 6 log.go:172] (0xc001b04e70) (0xc0027728c0) Create stream I0409 21:17:51.242606 6 log.go:172] (0xc001b04e70) (0xc0027728c0) Stream added, broadcasting: 5 I0409 21:17:51.243918 6 log.go:172] (0xc001b04e70) Reply frame received for 5 I0409 21:17:51.295911 6 log.go:172] (0xc001b04e70) Data frame received for 3 I0409 21:17:51.295939 6 log.go:172] (0xc0022945a0) (3) Data frame handling I0409 21:17:51.295951 6 log.go:172] (0xc0022945a0) (3) Data 
frame sent I0409 21:17:51.295960 6 log.go:172] (0xc001b04e70) Data frame received for 3 I0409 21:17:51.295968 6 log.go:172] (0xc0022945a0) (3) Data frame handling I0409 21:17:51.296033 6 log.go:172] (0xc001b04e70) Data frame received for 5 I0409 21:17:51.296067 6 log.go:172] (0xc0027728c0) (5) Data frame handling I0409 21:17:51.297909 6 log.go:172] (0xc001b04e70) Data frame received for 1 I0409 21:17:51.297936 6 log.go:172] (0xc00296b180) (1) Data frame handling I0409 21:17:51.297985 6 log.go:172] (0xc00296b180) (1) Data frame sent I0409 21:17:51.298022 6 log.go:172] (0xc001b04e70) (0xc00296b180) Stream removed, broadcasting: 1 I0409 21:17:51.298109 6 log.go:172] (0xc001b04e70) (0xc00296b180) Stream removed, broadcasting: 1 I0409 21:17:51.298141 6 log.go:172] (0xc001b04e70) (0xc0022945a0) Stream removed, broadcasting: 3 I0409 21:17:51.298331 6 log.go:172] (0xc001b04e70) Go away received I0409 21:17:51.298437 6 log.go:172] (0xc001b04e70) (0xc0027728c0) Stream removed, broadcasting: 5 Apr 9 21:17:51.298: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:17:51.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-617" for this suite. 
• [SLOW TEST:11.188 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":414,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:17:51.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:17:51.341: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:17:55.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5249" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":429,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:17:55.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:17:55.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d" in namespace "projected-5734" to be "success or failure" Apr 9 21:17:55.545: INFO: Pod "downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.65476ms Apr 9 21:17:57.549: INFO: Pod "downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007071425s Apr 9 21:17:59.553: INFO: Pod "downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011519032s STEP: Saw pod success Apr 9 21:17:59.553: INFO: Pod "downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d" satisfied condition "success or failure" Apr 9 21:17:59.556: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d container client-container: STEP: delete the pod Apr 9 21:17:59.604: INFO: Waiting for pod downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d to disappear Apr 9 21:17:59.611: INFO: Pod downwardapi-volume-2d11d491-0e5d-4055-8581-2199da99cd5d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:17:59.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5734" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:17:59.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 21:18:00.074: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 21:18:02.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063880, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063880, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063880, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722063880, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 21:18:05.120: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:18:05.169: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7686" for this suite. STEP: Destroying namespace "webhook-7686-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.664 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":37,"skipped":466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:18:05.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 9 21:18:05.361: INFO: PodSpec: 
initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:18:12.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3151" for this suite. • [SLOW TEST:7.556 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":38,"skipped":520,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:18:12.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:18:12.914: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 5.689219ms)
Apr 9 21:18:12.917: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.622373ms)
Apr 9 21:18:12.921: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.875931ms)
Apr 9 21:18:12.925: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.715969ms)
Apr 9 21:18:12.929: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.270868ms)
Apr 9 21:18:12.951: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 21.101243ms)
Apr 9 21:18:12.955: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.123968ms)
Apr 9 21:18:12.959: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.792641ms)
Apr 9 21:18:12.962: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.458981ms)
Apr 9 21:18:12.965: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.144724ms)
Apr 9 21:18:12.969: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.663064ms)
Apr 9 21:18:12.972: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.380544ms)
Apr 9 21:18:12.976: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.462017ms)
Apr 9 21:18:12.979: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.247131ms)
Apr 9 21:18:12.983: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.432357ms)
Apr 9 21:18:12.986: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.48641ms)
Apr 9 21:18:12.990: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.612634ms)
Apr 9 21:18:12.994: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.970388ms)
Apr 9 21:18:12.998: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.97244ms)
Apr 9 21:18:13.001: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 3.671949ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:18:13.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7075" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":39,"skipped":542,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:18:13.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0409 21:18:53.633350 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 9 21:18:53.633: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:18:53.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5899" for this suite. 
• [SLOW TEST:40.632 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":40,"skipped":550,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:18:53.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-b62cd174-deae-42d4-b0c9-2fddf3669965 STEP: Creating a pod to test consume secrets Apr 9 21:18:53.751: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6" in namespace "projected-7979" to be "success or failure" Apr 9 21:18:53.762: INFO: Pod "pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.185934ms Apr 9 21:18:55.766: INFO: Pod "pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014871156s Apr 9 21:18:57.770: INFO: Pod "pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018908438s STEP: Saw pod success Apr 9 21:18:57.770: INFO: Pod "pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6" satisfied condition "success or failure" Apr 9 21:18:57.772: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6 container projected-secret-volume-test: STEP: delete the pod Apr 9 21:18:57.809: INFO: Waiting for pod pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6 to disappear Apr 9 21:18:57.815: INFO: Pod pod-projected-secrets-2eece0e6-67ef-4f2e-9d9f-e08d007bbde6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:18:57.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7979" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":551,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:18:57.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:19:02.315: INFO: Waiting up to 5m0s for pod "client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0" in namespace "pods-3451" to be "success or failure" Apr 9 21:19:02.364: INFO: Pod "client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0": Phase="Pending", Reason="", readiness=false. Elapsed: 48.955888ms Apr 9 21:19:04.368: INFO: Pod "client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052825197s Apr 9 21:19:06.372: INFO: Pod "client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056596702s STEP: Saw pod success Apr 9 21:19:06.372: INFO: Pod "client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0" satisfied condition "success or failure" Apr 9 21:19:06.375: INFO: Trying to get logs from node jerma-worker pod client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0 container env3cont: STEP: delete the pod Apr 9 21:19:06.404: INFO: Waiting for pod client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0 to disappear Apr 9 21:19:06.421: INFO: Pod client-envvars-ef27ea6b-ab9c-453e-b1e1-034e6d5569d0 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:06.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3451" for this suite. • [SLOW TEST:8.607 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":558,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:06.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:19:06.520: INFO: Waiting up to 5m0s for pod "busybox-user-65534-80b6b270-3f96-4fda-8c72-4ca4bab7bcf1" in namespace "security-context-test-4816" to be "success or failure" Apr 9 21:19:06.523: INFO: Pod "busybox-user-65534-80b6b270-3f96-4fda-8c72-4ca4bab7bcf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.835729ms Apr 9 21:19:08.527: INFO: Pod "busybox-user-65534-80b6b270-3f96-4fda-8c72-4ca4bab7bcf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006615397s Apr 9 21:19:10.531: INFO: Pod "busybox-user-65534-80b6b270-3f96-4fda-8c72-4ca4bab7bcf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010776058s Apr 9 21:19:10.531: INFO: Pod "busybox-user-65534-80b6b270-3f96-4fda-8c72-4ca4bab7bcf1" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:10.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4816" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":559,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:10.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-497g STEP: Creating a pod to test atomic-volume-subpath Apr 9 21:19:10.627: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-497g" in namespace "subpath-5469" to be "success or failure" Apr 9 21:19:10.648: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Pending", Reason="", readiness=false. Elapsed: 21.106517ms Apr 9 21:19:12.730: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102842385s Apr 9 21:19:14.734: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 4.107213451s Apr 9 21:19:16.739: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.112405976s Apr 9 21:19:18.743: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 8.116218614s Apr 9 21:19:20.747: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 10.120565251s Apr 9 21:19:22.751: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 12.124236357s Apr 9 21:19:24.755: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 14.128625269s Apr 9 21:19:26.760: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 16.132843114s Apr 9 21:19:28.763: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 18.136613787s Apr 9 21:19:30.768: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 20.140781461s Apr 9 21:19:32.772: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Running", Reason="", readiness=true. Elapsed: 22.144849823s Apr 9 21:19:34.776: INFO: Pod "pod-subpath-test-configmap-497g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.148798641s STEP: Saw pod success Apr 9 21:19:34.776: INFO: Pod "pod-subpath-test-configmap-497g" satisfied condition "success or failure" Apr 9 21:19:34.779: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-497g container test-container-subpath-configmap-497g: STEP: delete the pod Apr 9 21:19:34.814: INFO: Waiting for pod pod-subpath-test-configmap-497g to disappear Apr 9 21:19:34.834: INFO: Pod pod-subpath-test-configmap-497g no longer exists STEP: Deleting pod pod-subpath-test-configmap-497g Apr 9 21:19:34.834: INFO: Deleting pod "pod-subpath-test-configmap-497g" in namespace "subpath-5469" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:34.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5469" for this suite. • [SLOW TEST:24.327 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":44,"skipped":565,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:34.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 9 21:19:38.041: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:38.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5416" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:38.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:19:38.222: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:39.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3398" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":46,"skipped":667,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:39.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:19:39.304: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655" in namespace "projected-4748" to be "success or failure" Apr 9 21:19:39.320: INFO: Pod "downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655": Phase="Pending", Reason="", readiness=false. Elapsed: 15.836588ms Apr 9 21:19:41.323: INFO: Pod "downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019199955s Apr 9 21:19:43.329: INFO: Pod "downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025397656s STEP: Saw pod success Apr 9 21:19:43.330: INFO: Pod "downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655" satisfied condition "success or failure" Apr 9 21:19:43.332: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655 container client-container: STEP: delete the pod Apr 9 21:19:43.396: INFO: Waiting for pod downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655 to disappear Apr 9 21:19:43.401: INFO: Pod downwardapi-volume-c0748d92-1792-42be-9825-3081f6f02655 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:43.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4748" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":673,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:43.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name 
projected-configmap-test-volume-eff680d4-ad46-4ff6-9a8a-b925c2b525cf STEP: Creating a pod to test consume configMaps Apr 9 21:19:43.518: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b" in namespace "projected-9866" to be "success or failure" Apr 9 21:19:43.532: INFO: Pod "pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.745088ms Apr 9 21:19:45.623: INFO: Pod "pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105231717s Apr 9 21:19:47.627: INFO: Pod "pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109558294s STEP: Saw pod success Apr 9 21:19:47.627: INFO: Pod "pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b" satisfied condition "success or failure" Apr 9 21:19:47.631: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b container projected-configmap-volume-test: STEP: delete the pod Apr 9 21:19:47.677: INFO: Waiting for pod pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b to disappear Apr 9 21:19:47.682: INFO: Pod pod-projected-configmaps-64b61150-8f73-4f4a-9e9d-63517033982b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:47.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9866" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:47.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:19:47.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb" in namespace "downward-api-2433" to be "success or failure" Apr 9 21:19:47.820: INFO: Pod "downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.955652ms Apr 9 21:19:49.880: INFO: Pod "downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069512547s Apr 9 21:19:51.884: INFO: Pod "downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073421243s STEP: Saw pod success Apr 9 21:19:51.884: INFO: Pod "downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb" satisfied condition "success or failure" Apr 9 21:19:51.887: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb container client-container: STEP: delete the pod Apr 9 21:19:51.908: INFO: Waiting for pod downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb to disappear Apr 9 21:19:51.913: INFO: Pod downwardapi-volume-a5f901dd-891f-4227-802e-0a0ebd663efb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:51.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2433" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:51.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 9 21:19:52.035: INFO: Waiting up to 5m0s for pod 
"pod-a1dc06dd-4faf-4597-8106-68232969154d" in namespace "emptydir-4844" to be "success or failure" Apr 9 21:19:52.039: INFO: Pod "pod-a1dc06dd-4faf-4597-8106-68232969154d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.932909ms Apr 9 21:19:54.078: INFO: Pod "pod-a1dc06dd-4faf-4597-8106-68232969154d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043083778s Apr 9 21:19:56.082: INFO: Pod "pod-a1dc06dd-4faf-4597-8106-68232969154d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047106051s STEP: Saw pod success Apr 9 21:19:56.082: INFO: Pod "pod-a1dc06dd-4faf-4597-8106-68232969154d" satisfied condition "success or failure" Apr 9 21:19:56.085: INFO: Trying to get logs from node jerma-worker pod pod-a1dc06dd-4faf-4597-8106-68232969154d container test-container: STEP: delete the pod Apr 9 21:19:56.122: INFO: Waiting for pod pod-a1dc06dd-4faf-4597-8106-68232969154d to disappear Apr 9 21:19:56.129: INFO: Pod pod-a1dc06dd-4faf-4597-8106-68232969154d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:19:56.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4844" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":736,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:19:56.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 9 21:20:04.320: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 9 21:20:04.324: INFO: Pod pod-with-poststart-http-hook still exists Apr 9 21:20:06.324: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 9 21:20:06.348: INFO: Pod pod-with-poststart-http-hook still exists Apr 9 21:20:08.324: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 9 21:20:08.329: INFO: Pod pod-with-poststart-http-hook still exists Apr 9 21:20:10.324: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 9 21:20:10.327: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:20:10.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3760" for this suite. 
• [SLOW TEST:14.199 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":742,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:20:10.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1802 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 9 21:20:10.415: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 9 21:20:38.530: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.70:8080/dial?request=hostname&protocol=http&host=10.244.1.69&port=8080&tries=1'] Namespace:pod-network-test-1802 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:20:38.530: INFO: >>> kubeConfig: /root/.kube/config I0409 21:20:38.560210 6 log.go:172] (0xc000ff1080) (0xc001e88320) Create stream I0409 21:20:38.560243 6 log.go:172] (0xc000ff1080) (0xc001e88320) Stream added, broadcasting: 1 I0409 21:20:38.562103 6 log.go:172] (0xc000ff1080) Reply frame received for 1 I0409 21:20:38.562144 6 log.go:172] (0xc000ff1080) (0xc002700000) Create stream I0409 21:20:38.562154 6 log.go:172] (0xc000ff1080) (0xc002700000) Stream added, broadcasting: 3 I0409 21:20:38.562905 6 log.go:172] (0xc000ff1080) Reply frame received for 3 I0409 21:20:38.562933 6 log.go:172] (0xc000ff1080) (0xc0027000a0) Create stream I0409 21:20:38.562942 6 log.go:172] (0xc000ff1080) (0xc0027000a0) Stream added, broadcasting: 5 I0409 21:20:38.563658 6 log.go:172] (0xc000ff1080) Reply frame received for 5 I0409 21:20:38.647412 6 log.go:172] (0xc000ff1080) Data frame received for 3 I0409 21:20:38.647443 6 log.go:172] (0xc002700000) (3) Data frame handling I0409 21:20:38.647460 6 log.go:172] (0xc002700000) (3) Data frame sent I0409 21:20:38.647983 6 log.go:172] (0xc000ff1080) Data frame received for 3 I0409 21:20:38.648030 6 log.go:172] (0xc002700000) (3) Data frame handling I0409 21:20:38.648145 6 log.go:172] (0xc000ff1080) Data frame received for 5 I0409 21:20:38.648170 6 log.go:172] (0xc0027000a0) (5) Data frame handling I0409 21:20:38.649905 6 log.go:172] (0xc000ff1080) Data frame received for 1 I0409 21:20:38.649939 6 log.go:172] (0xc001e88320) (1) Data frame handling I0409 21:20:38.649951 6 log.go:172] (0xc001e88320) (1) Data frame sent I0409 21:20:38.649966 6 log.go:172] (0xc000ff1080) (0xc001e88320) Stream removed, broadcasting: 1 I0409 21:20:38.649990 6 log.go:172] (0xc000ff1080) Go away received I0409 
21:20:38.650112 6 log.go:172] (0xc000ff1080) (0xc001e88320) Stream removed, broadcasting: 1 I0409 21:20:38.650144 6 log.go:172] (0xc000ff1080) (0xc002700000) Stream removed, broadcasting: 3 I0409 21:20:38.650164 6 log.go:172] (0xc000ff1080) (0xc0027000a0) Stream removed, broadcasting: 5 Apr 9 21:20:38.650: INFO: Waiting for responses: map[] Apr 9 21:20:38.653: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostname&protocol=http&host=10.244.2.158&port=8080&tries=1'] Namespace:pod-network-test-1802 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:20:38.653: INFO: >>> kubeConfig: /root/.kube/config I0409 21:20:38.703653 6 log.go:172] (0xc000ff1760) (0xc001e88640) Create stream I0409 21:20:38.703690 6 log.go:172] (0xc000ff1760) (0xc001e88640) Stream added, broadcasting: 1 I0409 21:20:38.706775 6 log.go:172] (0xc000ff1760) Reply frame received for 1 I0409 21:20:38.706830 6 log.go:172] (0xc000ff1760) (0xc0022955e0) Create stream I0409 21:20:38.706855 6 log.go:172] (0xc000ff1760) (0xc0022955e0) Stream added, broadcasting: 3 I0409 21:20:38.708031 6 log.go:172] (0xc000ff1760) Reply frame received for 3 I0409 21:20:38.708089 6 log.go:172] (0xc000ff1760) (0xc001e88780) Create stream I0409 21:20:38.708112 6 log.go:172] (0xc000ff1760) (0xc001e88780) Stream added, broadcasting: 5 I0409 21:20:38.709767 6 log.go:172] (0xc000ff1760) Reply frame received for 5 I0409 21:20:38.759178 6 log.go:172] (0xc000ff1760) Data frame received for 3 I0409 21:20:38.759208 6 log.go:172] (0xc0022955e0) (3) Data frame handling I0409 21:20:38.759236 6 log.go:172] (0xc0022955e0) (3) Data frame sent I0409 21:20:38.759430 6 log.go:172] (0xc000ff1760) Data frame received for 5 I0409 21:20:38.759459 6 log.go:172] (0xc001e88780) (5) Data frame handling I0409 21:20:38.759483 6 log.go:172] (0xc000ff1760) Data frame received for 3 I0409 21:20:38.759497 6 log.go:172] 
(0xc0022955e0) (3) Data frame handling I0409 21:20:38.760801 6 log.go:172] (0xc000ff1760) Data frame received for 1 I0409 21:20:38.760834 6 log.go:172] (0xc001e88640) (1) Data frame handling I0409 21:20:38.760875 6 log.go:172] (0xc001e88640) (1) Data frame sent I0409 21:20:38.760902 6 log.go:172] (0xc000ff1760) (0xc001e88640) Stream removed, broadcasting: 1 I0409 21:20:38.760988 6 log.go:172] (0xc000ff1760) Go away received I0409 21:20:38.761021 6 log.go:172] (0xc000ff1760) (0xc001e88640) Stream removed, broadcasting: 1 I0409 21:20:38.761050 6 log.go:172] (0xc000ff1760) (0xc0022955e0) Stream removed, broadcasting: 3 I0409 21:20:38.761066 6 log.go:172] (0xc000ff1760) (0xc001e88780) Stream removed, broadcasting: 5 Apr 9 21:20:38.761: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:20:38.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1802" for this suite. 
• [SLOW TEST:28.433 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":758,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:20:38.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 9 21:20:38.840: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:20:55.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6063" for this suite. • [SLOW TEST:16.858 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":53,"skipped":764,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:20:55.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:20:55.680: INFO: Creating ReplicaSet my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899 Apr 9 21:20:55.698: INFO: Pod name my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899: Found 0 pods out of 1 Apr 9 21:21:00.702: INFO: Pod name my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899: Found 1 pods out of 1 Apr 9 21:21:00.702: INFO: Ensuring a pod for ReplicaSet 
"my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899" is running Apr 9 21:21:00.704: INFO: Pod "my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899-hqj42" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 21:20:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 21:20:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 21:20:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 21:20:55 +0000 UTC Reason: Message:}]) Apr 9 21:21:00.705: INFO: Trying to dial the pod Apr 9 21:21:05.716: INFO: Controller my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899: Got expected result from replica 1 [my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899-hqj42]: "my-hostname-basic-43f7e1ba-7b5d-4069-b374-1366fa3d9899-hqj42", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:05.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7013" for this suite. 
• [SLOW TEST:10.096 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":54,"skipped":764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:05.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-9f8d4701-4034-485c-9136-ea1ea48fca26 STEP: Creating a pod to test consume configMaps Apr 9 21:21:05.797: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d" in namespace "projected-1337" to be "success or failure" Apr 9 21:21:05.800: INFO: Pod "pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.275571ms Apr 9 21:21:07.833: INFO: Pod "pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035728041s Apr 9 21:21:09.837: INFO: Pod "pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040184535s STEP: Saw pod success Apr 9 21:21:09.837: INFO: Pod "pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d" satisfied condition "success or failure" Apr 9 21:21:09.841: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d container projected-configmap-volume-test: STEP: delete the pod Apr 9 21:21:09.872: INFO: Waiting for pod pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d to disappear Apr 9 21:21:09.899: INFO: Pod pod-projected-configmaps-2f04a2d5-1403-44ea-be5c-dc1c3c3a567d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:09.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1337" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":799,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:09.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 9 21:21:09.981: INFO: Waiting up to 5m0s for pod "pod-42a902f4-f398-45b5-aa09-00ce0926b348" in namespace "emptydir-4831" to be "success or failure" Apr 9 21:21:09.984: INFO: Pod "pod-42a902f4-f398-45b5-aa09-00ce0926b348": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504154ms Apr 9 21:21:11.988: INFO: Pod "pod-42a902f4-f398-45b5-aa09-00ce0926b348": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006788172s Apr 9 21:21:13.992: INFO: Pod "pod-42a902f4-f398-45b5-aa09-00ce0926b348": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010886761s STEP: Saw pod success Apr 9 21:21:13.992: INFO: Pod "pod-42a902f4-f398-45b5-aa09-00ce0926b348" satisfied condition "success or failure" Apr 9 21:21:13.995: INFO: Trying to get logs from node jerma-worker2 pod pod-42a902f4-f398-45b5-aa09-00ce0926b348 container test-container: STEP: delete the pod Apr 9 21:21:14.015: INFO: Waiting for pod pod-42a902f4-f398-45b5-aa09-00ce0926b348 to disappear Apr 9 21:21:14.020: INFO: Pod pod-42a902f4-f398-45b5-aa09-00ce0926b348 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:14.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4831" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":828,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:14.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 9 21:21:21.032: INFO: 0 pods remaining Apr 9 21:21:21.032: INFO: 0 pods has nil DeletionTimestamp 
Apr 9 21:21:21.032: INFO: Apr 9 21:21:21.673: INFO: 0 pods remaining Apr 9 21:21:21.673: INFO: 0 pods has nil DeletionTimestamp Apr 9 21:21:21.673: INFO: STEP: Gathering metrics W0409 21:21:22.520773 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 9 21:21:22.520: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:22.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6956" for this suite. 
• [SLOW TEST:8.507 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":57,"skipped":844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:22.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:21:23.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21" in namespace "downward-api-6335" to be "success or failure" Apr 9 21:21:23.139: INFO: Pod "downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21": 
Phase="Pending", Reason="", readiness=false. Elapsed: 133.1495ms Apr 9 21:21:25.142: INFO: Pod "downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137006096s Apr 9 21:21:27.147: INFO: Pod "downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141545272s STEP: Saw pod success Apr 9 21:21:27.147: INFO: Pod "downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21" satisfied condition "success or failure" Apr 9 21:21:27.150: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21 container client-container: STEP: delete the pod Apr 9 21:21:27.197: INFO: Waiting for pod downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21 to disappear Apr 9 21:21:27.234: INFO: Pod downwardapi-volume-0b60bca2-6a41-4cee-8773-43154c719b21 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:27.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6335" for this suite. 
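The Downward API test above verifies that when a container declares no CPU limit, the downward-API volume exposes the node's allocatable CPU as the default. The defaulting rule can be sketched roughly as below; this is a simplification (real values are resource.Quantity objects, not raw millicores), and the function name is illustrative.

```go
package main

import "fmt"

// effectiveCPULimitMilli returns the container's CPU limit in millicores,
// falling back to the node's allocatable CPU when no limit is set (0) --
// the behavior this conformance test asserts. A sketch, not k8s code.
func effectiveCPULimitMilli(containerLimit, nodeAllocatable int64) int64 {
	if containerLimit == 0 {
		return nodeAllocatable
	}
	return containerLimit
}

func main() {
	// No limit set: the volume should report the node allocatable value.
	fmt.Println(effectiveCPULimitMilli(0, 16000)) // 16000
	// An explicit limit wins when present.
	fmt.Println(effectiveCPULimitMilli(500, 16000)) // 500
}
```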
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":895,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:27.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:21:27.280: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 9 21:21:29.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6309 create -f -' Apr 9 21:21:32.619: INFO: stderr: "" Apr 9 21:21:32.619: INFO: stdout: "e2e-test-crd-publish-openapi-331-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 9 21:21:32.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6309 delete e2e-test-crd-publish-openapi-331-crds test-cr' Apr 9 21:21:32.734: INFO: stderr: "" Apr 9 21:21:32.734: INFO: stdout: "e2e-test-crd-publish-openapi-331-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 9 21:21:32.734: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6309 apply -f -' Apr 9 21:21:33.009: INFO: stderr: "" Apr 9 21:21:33.009: INFO: stdout: "e2e-test-crd-publish-openapi-331-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 9 21:21:33.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6309 delete e2e-test-crd-publish-openapi-331-crds test-cr' Apr 9 21:21:33.184: INFO: stderr: "" Apr 9 21:21:33.184: INFO: stdout: "e2e-test-crd-publish-openapi-331-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 9 21:21:33.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-331-crds' Apr 9 21:21:33.470: INFO: stderr: "" Apr 9 21:21:33.470: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-331-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:36.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6309" for this suite. • [SLOW TEST:9.137 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":59,"skipped":901,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:36.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name 
projected-configmap-test-upd-0ce8b028-6fb9-4802-96f4-32157ef32daf STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-0ce8b028-6fb9-4802-96f4-32157ef32daf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:42.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1887" for this suite. • [SLOW TEST:6.157 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:42.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 9 21:21:42.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9660' Apr 9 21:21:42.907: INFO: stderr: "" Apr 9 21:21:42.907: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 9 21:21:42.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9660' Apr 9 21:21:43.026: INFO: stderr: "" Apr 9 21:21:43.026: INFO: stdout: "update-demo-nautilus-6nsx2 update-demo-nautilus-wjhzs " Apr 9 21:21:43.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6nsx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9660' Apr 9 21:21:43.128: INFO: stderr: "" Apr 9 21:21:43.128: INFO: stdout: "" Apr 9 21:21:43.128: INFO: update-demo-nautilus-6nsx2 is created but not running Apr 9 21:21:48.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9660' Apr 9 21:21:48.231: INFO: stderr: "" Apr 9 21:21:48.231: INFO: stdout: "update-demo-nautilus-6nsx2 update-demo-nautilus-wjhzs " Apr 9 21:21:48.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6nsx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9660' Apr 9 21:21:48.324: INFO: stderr: "" Apr 9 21:21:48.324: INFO: stdout: "true" Apr 9 21:21:48.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6nsx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9660' Apr 9 21:21:48.421: INFO: stderr: "" Apr 9 21:21:48.421: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 21:21:48.421: INFO: validating pod update-demo-nautilus-6nsx2 Apr 9 21:21:48.425: INFO: got data: { "image": "nautilus.jpg" } Apr 9 21:21:48.425: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 9 21:21:48.425: INFO: update-demo-nautilus-6nsx2 is verified up and running Apr 9 21:21:48.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjhzs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9660' Apr 9 21:21:48.519: INFO: stderr: "" Apr 9 21:21:48.519: INFO: stdout: "true" Apr 9 21:21:48.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjhzs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9660' Apr 9 21:21:48.603: INFO: stderr: "" Apr 9 21:21:48.603: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 21:21:48.603: INFO: validating pod update-demo-nautilus-wjhzs Apr 9 21:21:48.606: INFO: got data: { "image": "nautilus.jpg" } Apr 9 21:21:48.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 9 21:21:48.606: INFO: update-demo-nautilus-wjhzs is verified up and running STEP: using delete to clean up resources Apr 9 21:21:48.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9660' Apr 9 21:21:48.709: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:21:48.709: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 9 21:21:48.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9660' Apr 9 21:21:48.809: INFO: stderr: "No resources found in kubectl-9660 namespace.\n" Apr 9 21:21:48.809: INFO: stdout: "" Apr 9 21:21:48.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9660 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 21:21:48.918: INFO: stderr: "" Apr 9 21:21:48.918: INFO: stdout: "update-demo-nautilus-6nsx2\nupdate-demo-nautilus-wjhzs\n" Apr 9 21:21:49.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9660' Apr 9 21:21:49.508: INFO: stderr: "No resources found in kubectl-9660 namespace.\n" Apr 9 21:21:49.508: INFO: stdout: "" Apr 9 21:21:49.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9660 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 21:21:49.671: INFO: stderr: "" Apr 9 21:21:49.671: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:49.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9660" for this suite. • [SLOW TEST:7.146 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":61,"skipped":968,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:49.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 9 21:21:54.491: INFO: Successfully updated pod "labelsupdate85f28bcb-9ee6-4d89-be51-c910c3476690" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:21:58.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8917" for this suite. • [SLOW TEST:8.850 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":979,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:21:58.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 9 21:21:58.582: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:22:12.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2239" for this suite. • [SLOW TEST:14.369 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":63,"skipped":991,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:22:12.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 9 21:22:17.041: INFO: 
Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 9 21:22:32.155: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:22:32.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-80" for this suite. • [SLOW TEST:19.265 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":64,"skipped":1008,"failed":0} SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:22:32.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-dd14ac45-06c3-4b6c-8bb8-1ca6c140e721 STEP: Creating secret with name secret-projected-all-test-volume-1b90ce5a-acce-45ad-9031-a84c61637d22 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 9 21:22:32.249: INFO: Waiting up to 5m0s for pod "projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc" in namespace "projected-6102" to be "success or failure" Apr 9 21:22:32.252: INFO: Pod "projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.940503ms Apr 9 21:22:34.256: INFO: Pod "projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006791627s Apr 9 21:22:36.260: INFO: Pod "projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011370749s STEP: Saw pod success Apr 9 21:22:36.260: INFO: Pod "projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc" satisfied condition "success or failure" Apr 9 21:22:36.264: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc container projected-all-volume-test: STEP: delete the pod Apr 9 21:22:36.328: INFO: Waiting for pod projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc to disappear Apr 9 21:22:36.354: INFO: Pod projected-volume-ac6656f2-aa02-47a2-80b6-8e02150499dc no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:22:36.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6102" for this suite. 
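Several kubectl invocations earlier in this log (the Update Demo steps) extract fields with Go templates such as `{{range .items}}{{.metadata.name}} {{end}}`. The same template string can be exercised directly with Go's text/template over decoded JSON, as sketched below with a hand-written pod list; note that kubectl's extra template functions like `exists`, seen in the log, are not part of the standard package.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// renderNames decodes a `kubectl get pods -o json`-shaped document and
// applies the same template string kubectl was given with --template=...
// Field access like .metadata.name works on the decoded maps.
func renderNames(doc []byte) (string, error) {
	var data interface{}
	if err := json.Unmarshal(doc, &data); err != nil {
		return "", err
	}
	tmpl, err := template.New("names").Parse(`{{range .items}}{{.metadata.name}} {{end}}`)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, data); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// A miniature stand-in for the pod list the suite queried.
	podList := []byte(`{"items":[
		{"metadata":{"name":"update-demo-nautilus-6nsx2"}},
		{"metadata":{"name":"update-demo-nautilus-wjhzs"}}
	]}`)
	names, err := renderNames(podList)
	if err != nil {
		panic(err)
	}
	fmt.Println(names) // both pod names, space-separated
}
```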
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:22:36.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:22:36.478: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2115bbe9-7305-4037-b3cc-66e8c5e37aa5" in namespace "security-context-test-3365" to be "success or failure" Apr 9 21:22:36.486: INFO: Pod "busybox-readonly-false-2115bbe9-7305-4037-b3cc-66e8c5e37aa5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328465ms Apr 9 21:22:38.553: INFO: Pod "busybox-readonly-false-2115bbe9-7305-4037-b3cc-66e8c5e37aa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075139373s Apr 9 21:22:40.557: INFO: Pod "busybox-readonly-false-2115bbe9-7305-4037-b3cc-66e8c5e37aa5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079330797s Apr 9 21:22:40.557: INFO: Pod "busybox-readonly-false-2115bbe9-7305-4037-b3cc-66e8c5e37aa5" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:22:40.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3365" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1057,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:22:40.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4739 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 9 21:22:40.618: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 9 21:23:06.784: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.82:8080/dial?request=hostname&protocol=udp&host=10.244.1.81&port=8081&tries=1'] 
Namespace:pod-network-test-4739 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:23:06.784: INFO: >>> kubeConfig: /root/.kube/config I0409 21:23:06.822729 6 log.go:172] (0xc002142000) (0xc002340000) Create stream I0409 21:23:06.822774 6 log.go:172] (0xc002142000) (0xc002340000) Stream added, broadcasting: 1 I0409 21:23:06.824913 6 log.go:172] (0xc002142000) Reply frame received for 1 I0409 21:23:06.824967 6 log.go:172] (0xc002142000) (0xc00221c280) Create stream I0409 21:23:06.824985 6 log.go:172] (0xc002142000) (0xc00221c280) Stream added, broadcasting: 3 I0409 21:23:06.826123 6 log.go:172] (0xc002142000) Reply frame received for 3 I0409 21:23:06.826152 6 log.go:172] (0xc002142000) (0xc0023960a0) Create stream I0409 21:23:06.826158 6 log.go:172] (0xc002142000) (0xc0023960a0) Stream added, broadcasting: 5 I0409 21:23:06.827105 6 log.go:172] (0xc002142000) Reply frame received for 5 I0409 21:23:06.913705 6 log.go:172] (0xc002142000) Data frame received for 3 I0409 21:23:06.913757 6 log.go:172] (0xc00221c280) (3) Data frame handling I0409 21:23:06.913787 6 log.go:172] (0xc00221c280) (3) Data frame sent I0409 21:23:06.914422 6 log.go:172] (0xc002142000) Data frame received for 5 I0409 21:23:06.914460 6 log.go:172] (0xc0023960a0) (5) Data frame handling I0409 21:23:06.914600 6 log.go:172] (0xc002142000) Data frame received for 3 I0409 21:23:06.914632 6 log.go:172] (0xc00221c280) (3) Data frame handling I0409 21:23:06.916566 6 log.go:172] (0xc002142000) Data frame received for 1 I0409 21:23:06.916606 6 log.go:172] (0xc002340000) (1) Data frame handling I0409 21:23:06.916637 6 log.go:172] (0xc002340000) (1) Data frame sent I0409 21:23:06.916661 6 log.go:172] (0xc002142000) (0xc002340000) Stream removed, broadcasting: 1 I0409 21:23:06.916691 6 log.go:172] (0xc002142000) Go away received I0409 21:23:06.916781 6 log.go:172] (0xc002142000) (0xc002340000) Stream removed, broadcasting: 1 I0409 
21:23:06.916815 6 log.go:172] (0xc002142000) (0xc00221c280) Stream removed, broadcasting: 3 I0409 21:23:06.916837 6 log.go:172] (0xc002142000) (0xc0023960a0) Stream removed, broadcasting: 5 Apr 9 21:23:06.916: INFO: Waiting for responses: map[] Apr 9 21:23:06.920: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.82:8080/dial?request=hostname&protocol=udp&host=10.244.2.170&port=8081&tries=1'] Namespace:pod-network-test-4739 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:23:06.920: INFO: >>> kubeConfig: /root/.kube/config I0409 21:23:06.957100 6 log.go:172] (0xc0025c2000) (0xc002396140) Create stream I0409 21:23:06.957317 6 log.go:172] (0xc0025c2000) (0xc002396140) Stream added, broadcasting: 1 I0409 21:23:06.963458 6 log.go:172] (0xc0025c2000) Reply frame received for 1 I0409 21:23:06.963539 6 log.go:172] (0xc0025c2000) (0xc00221c3c0) Create stream I0409 21:23:06.963567 6 log.go:172] (0xc0025c2000) (0xc00221c3c0) Stream added, broadcasting: 3 I0409 21:23:06.964683 6 log.go:172] (0xc0025c2000) Reply frame received for 3 I0409 21:23:06.964708 6 log.go:172] (0xc0025c2000) (0xc002340280) Create stream I0409 21:23:06.964717 6 log.go:172] (0xc0025c2000) (0xc002340280) Stream added, broadcasting: 5 I0409 21:23:06.965781 6 log.go:172] (0xc0025c2000) Reply frame received for 5 I0409 21:23:07.043740 6 log.go:172] (0xc0025c2000) Data frame received for 3 I0409 21:23:07.043775 6 log.go:172] (0xc00221c3c0) (3) Data frame handling I0409 21:23:07.043802 6 log.go:172] (0xc00221c3c0) (3) Data frame sent I0409 21:23:07.044345 6 log.go:172] (0xc0025c2000) Data frame received for 5 I0409 21:23:07.044367 6 log.go:172] (0xc002340280) (5) Data frame handling I0409 21:23:07.044399 6 log.go:172] (0xc0025c2000) Data frame received for 3 I0409 21:23:07.044425 6 log.go:172] (0xc00221c3c0) (3) Data frame handling I0409 21:23:07.046226 6 log.go:172] (0xc0025c2000) Data frame received 
for 1 I0409 21:23:07.046250 6 log.go:172] (0xc002396140) (1) Data frame handling I0409 21:23:07.046271 6 log.go:172] (0xc002396140) (1) Data frame sent I0409 21:23:07.046290 6 log.go:172] (0xc0025c2000) (0xc002396140) Stream removed, broadcasting: 1 I0409 21:23:07.046315 6 log.go:172] (0xc0025c2000) Go away received I0409 21:23:07.046398 6 log.go:172] (0xc0025c2000) (0xc002396140) Stream removed, broadcasting: 1 I0409 21:23:07.046425 6 log.go:172] (0xc0025c2000) (0xc00221c3c0) Stream removed, broadcasting: 3 I0409 21:23:07.046443 6 log.go:172] (0xc0025c2000) (0xc002340280) Stream removed, broadcasting: 5 Apr 9 21:23:07.046: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:23:07.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4739" for this suite. • [SLOW TEST:26.488 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1075,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:23:07.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 9 21:23:07.105: INFO: Waiting up to 5m0s for pod "var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4" in namespace "var-expansion-677" to be "success or failure" Apr 9 21:23:07.109: INFO: Pod "var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.69189ms Apr 9 21:23:09.113: INFO: Pod "var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007695688s Apr 9 21:23:11.117: INFO: Pod "var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011341647s STEP: Saw pod success Apr 9 21:23:11.117: INFO: Pod "var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4" satisfied condition "success or failure" Apr 9 21:23:11.120: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4 container dapi-container: STEP: delete the pod Apr 9 21:23:11.139: INFO: Waiting for pod var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4 to disappear Apr 9 21:23:11.164: INFO: Pod var-expansion-2b74b596-97fc-413d-8842-a5ea552dafc4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:23:11.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-677" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1092,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:23:11.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 
21:23:11.240: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 9 21:23:11.247: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:11.263: INFO: Number of nodes with available pods: 0 Apr 9 21:23:11.263: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:12.270: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:12.290: INFO: Number of nodes with available pods: 0 Apr 9 21:23:12.290: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:13.551: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:13.599: INFO: Number of nodes with available pods: 0 Apr 9 21:23:13.599: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:14.269: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:14.273: INFO: Number of nodes with available pods: 0 Apr 9 21:23:14.273: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:15.269: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:15.272: INFO: Number of nodes with available pods: 0 Apr 9 21:23:15.272: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:16.284: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Apr 9 21:23:16.287: INFO: Number of nodes with available pods: 2 Apr 9 21:23:16.288: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 9 21:23:16.343: INFO: Wrong image for pod: daemon-set-f8gws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:16.343: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:16.349: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:17.354: INFO: Wrong image for pod: daemon-set-f8gws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:17.354: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:17.359: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:18.355: INFO: Wrong image for pod: daemon-set-f8gws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:18.355: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:18.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:19.354: INFO: Wrong image for pod: daemon-set-f8gws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 9 21:23:19.354: INFO: Pod daemon-set-f8gws is not available Apr 9 21:23:19.354: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:19.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:20.353: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:20.353: INFO: Pod daemon-set-s2fsn is not available Apr 9 21:23:20.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:21.353: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:21.353: INFO: Pod daemon-set-s2fsn is not available Apr 9 21:23:21.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:22.354: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:22.354: INFO: Pod daemon-set-s2fsn is not available Apr 9 21:23:22.359: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:23.354: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 9 21:23:23.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:24.353: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:24.353: INFO: Pod daemon-set-ll4l4 is not available Apr 9 21:23:24.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:25.354: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:25.354: INFO: Pod daemon-set-ll4l4 is not available Apr 9 21:23:25.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:26.354: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:26.354: INFO: Pod daemon-set-ll4l4 is not available Apr 9 21:23:26.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:27.354: INFO: Wrong image for pod: daemon-set-ll4l4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:27.354: INFO: Pod daemon-set-ll4l4 is not available Apr 9 21:23:27.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:28.356: INFO: Wrong image for pod: daemon-set-ll4l4. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 21:23:28.356: INFO: Pod daemon-set-ll4l4 is not available Apr 9 21:23:28.359: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:29.354: INFO: Pod daemon-set-jntjz is not available Apr 9 21:23:29.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 9 21:23:29.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:29.364: INFO: Number of nodes with available pods: 1 Apr 9 21:23:29.364: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:30.369: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:30.372: INFO: Number of nodes with available pods: 1 Apr 9 21:23:30.372: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:31.368: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:31.371: INFO: Number of nodes with available pods: 1 Apr 9 21:23:31.371: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:23:32.369: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:23:32.373: INFO: Number of nodes with available pods: 2 Apr 9 21:23:32.373: INFO: Number of 
running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6791, will wait for the garbage collector to delete the pods Apr 9 21:23:32.446: INFO: Deleting DaemonSet.extensions daemon-set took: 6.542435ms Apr 9 21:23:32.746: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.214736ms Apr 9 21:23:39.554: INFO: Number of nodes with available pods: 0 Apr 9 21:23:39.554: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 21:23:39.557: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6791/daemonsets","resourceVersion":"6771272"},"items":null} Apr 9 21:23:39.560: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6791/pods","resourceVersion":"6771272"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:23:39.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6791" for this suite. 
• [SLOW TEST:28.404 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":69,"skipped":1113,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:23:39.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 9 21:23:40.024: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 9 21:23:42.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064220, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064220, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064220, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064220, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 21:23:45.104: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:23:45.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:23:46.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3318" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.876 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":70,"skipped":1116,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:23:46.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 21:23:47.596: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 21:23:49.606: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064227, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064227, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064227, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064227, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 21:23:52.636: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:23:52.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-286-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:23:53.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6703" for this suite. 
STEP: Destroying namespace "webhook-6703-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.538 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":71,"skipped":1117,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:23:53.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:23:54.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 9 21:23:54.443: INFO: stderr: "" Apr 9 21:23:54.443: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", 
GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:23:54.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6693" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":72,"skipped":1120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:23:54.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 9 21:23:54.494: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 9 21:23:54.504: INFO: Waiting for terminating namespaces to be deleted... 
Apr 9 21:23:54.506: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 9 21:23:54.521: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 21:23:54.521: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 21:23:54.521: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 21:23:54.521: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 21:23:54.521: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 9 21:23:54.527: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 21:23:54.528: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 21:23:54.528: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 9 21:23:54.528: INFO: Container kube-hunter ready: false, restart count 0 Apr 9 21:23:54.528: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 21:23:54.528: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 21:23:54.528: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 9 21:23:54.528: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-b5ef2d00-03a3-4424-b620-8da984f30b09 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-b5ef2d00-03a3-4424-b620-8da984f30b09 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b5ef2d00-03a3-4424-b620-8da984f30b09 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:24:10.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6604" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.313 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":73,"skipped":1147,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:24:10.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 21:24:11.628: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 21:24:13.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064251, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064251, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064251, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064251, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service 
has paired with the endpoint Apr 9 21:24:16.692: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:24:16.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-860" for this suite. STEP: Destroying namespace "webhook-860-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.749 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":74,"skipped":1149,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:24:17.514: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-696061d0-f915-41cf-b9b2-6617261a9fe3 STEP: Creating a pod to test consume secrets Apr 9 21:24:17.639: INFO: Waiting up to 5m0s for pod "pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad" in namespace "secrets-9443" to be "success or failure" Apr 9 21:24:17.728: INFO: Pod "pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad": Phase="Pending", Reason="", readiness=false. Elapsed: 89.295905ms Apr 9 21:24:19.780: INFO: Pod "pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140464319s Apr 9 21:24:21.784: INFO: Pod "pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.144589395s STEP: Saw pod success Apr 9 21:24:21.784: INFO: Pod "pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad" satisfied condition "success or failure" Apr 9 21:24:21.787: INFO: Trying to get logs from node jerma-worker pod pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad container secret-volume-test: STEP: delete the pod Apr 9 21:24:21.807: INFO: Waiting for pod pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad to disappear Apr 9 21:24:21.812: INFO: Pod pod-secrets-07f23994-d451-4892-b919-cb3ffd33fcad no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:24:21.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9443" for this suite. 
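As an aside, the scenario this test exercises (one Secret mounted at two paths in the same pod, read once, then the pod exits) can be sketched with a manifest along these lines; the resource names here are illustrative, not taken from the test:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret              # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test     # container name matches the log above
    image: busybox
    command: ["cat", "/etc/secret-volume-1/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
```

The "Waiting up to 5m0s ... to be 'success or failure'" lines above correspond to the framework polling this pod until its phase reaches Succeeded.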
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1168,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:24:21.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-80a34471-1470-4bf6-ae7f-32511d024a9a STEP: Creating a pod to test consume configMaps Apr 9 21:24:21.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee" in namespace "configmap-5659" to be "success or failure" Apr 9 21:24:21.940: INFO: Pod "pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee": Phase="Pending", Reason="", readiness=false. Elapsed: 17.676557ms Apr 9 21:24:23.944: INFO: Pod "pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02177778s Apr 9 21:24:25.948: INFO: Pod "pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02599674s STEP: Saw pod success Apr 9 21:24:25.948: INFO: Pod "pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee" satisfied condition "success or failure" Apr 9 21:24:25.951: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee container configmap-volume-test: STEP: delete the pod Apr 9 21:24:25.985: INFO: Waiting for pod pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee to disappear Apr 9 21:24:25.998: INFO: Pod pod-configmaps-fc281eef-525f-4568-aad3-0eddd8672eee no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:24:25.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5659" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:24:26.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: 
Check that daemon pods launch on every node of the cluster. Apr 9 21:24:26.164: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:24:26.178: INFO: Number of nodes with available pods: 0 Apr 9 21:24:26.178: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:24:27.195: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:24:27.198: INFO: Number of nodes with available pods: 0 Apr 9 21:24:27.198: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:24:28.182: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:24:28.185: INFO: Number of nodes with available pods: 0 Apr 9 21:24:28.185: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:24:29.220: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:24:29.224: INFO: Number of nodes with available pods: 0 Apr 9 21:24:29.224: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:24:30.191: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:24:30.194: INFO: Number of nodes with available pods: 2 Apr 9 21:24:30.194: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 9 21:24:30.213: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:24:30.231: INFO: Number of nodes with available pods: 2 Apr 9 21:24:30.231: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1086, will wait for the garbage collector to delete the pods Apr 9 21:24:31.398: INFO: Deleting DaemonSet.extensions daemon-set took: 11.122081ms Apr 9 21:24:31.498: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.215507ms Apr 9 21:24:39.302: INFO: Number of nodes with available pods: 0 Apr 9 21:24:39.302: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 21:24:39.305: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1086/daemonsets","resourceVersion":"6771885"},"items":null} Apr 9 21:24:39.308: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1086/pods","resourceVersion":"6771885"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:24:39.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1086" for this suite. 
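For reference, the "can't tolerate node jerma-control-plane" lines above come from the test DaemonSet carrying no toleration for the master taint; a minimal sketch of such a DaemonSet (image and labels illustrative) is:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set               # name matches the log; the rest is illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.2
      # No toleration for node-role.kubernetes.io/master:NoSchedule is set,
      # so the DaemonSet controller skips the control-plane node -- which is
      # why only the two worker nodes are counted as "running nodes" above.
```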
• [SLOW TEST:13.317 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":77,"skipped":1201,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:24:39.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-e5b90fa6-a3a4-4f46-9fb0-41febfb01085 STEP: Creating a pod to test consume secrets Apr 9 21:24:39.384: INFO: Waiting up to 5m0s for pod "pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b" in namespace "secrets-8291" to be "success or failure" Apr 9 21:24:39.388: INFO: Pod "pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.958449ms Apr 9 21:24:41.392: INFO: Pod "pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008487874s Apr 9 21:24:43.397: INFO: Pod "pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01280686s STEP: Saw pod success Apr 9 21:24:43.397: INFO: Pod "pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b" satisfied condition "success or failure" Apr 9 21:24:43.400: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b container secret-volume-test: STEP: delete the pod Apr 9 21:24:43.419: INFO: Waiting for pod pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b to disappear Apr 9 21:24:43.423: INFO: Pod pod-secrets-2fd209af-e225-4cdd-9f47-86f82eafe25b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:24:43.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8291" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1202,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:24:43.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 
STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 9 21:24:51.546: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 9 21:24:51.554: INFO: Pod pod-with-prestop-http-hook still exists Apr 9 21:24:53.555: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 9 21:24:53.559: INFO: Pod pod-with-prestop-http-hook still exists Apr 9 21:24:55.555: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 9 21:24:55.559: INFO: Pod pod-with-prestop-http-hook still exists Apr 9 21:24:57.555: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 9 21:24:57.567: INFO: Pod pod-with-prestop-http-hook still exists Apr 9 21:24:59.555: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 9 21:24:59.579: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:24:59.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5483" for this suite. 
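The "still exists" polling above reflects how a preStop hook delays pod deletion: the kubelet runs the hook before sending SIGTERM. A hedged sketch of the pod under test (the pod name is from the log; port, path, and target IP are illustrative, since the real test points the hook at its handler pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name from the log; spec details assumed
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: registry.k8s.io/pause:3.2
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # illustrative path
          port: 8080                 # illustrative port on the handler pod
          host: 10.244.1.10          # illustrative handler pod IP
```

The final "check prestop hook" step then asserts that the handler container actually received the HTTP GET.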
• [SLOW TEST:16.164 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1219,"failed":0} [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:24:59.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 9 21:24:59.636: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Apr 9 21:25:00.220: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 9 21:25:02.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064300, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064300, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064300, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064300, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 21:25:04.959: INFO: Waited 639.236027ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:25:05.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1318" for this suite. 
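"Registering the sample API server" above means creating an APIService object that tells the aggregation layer to proxy a group/version to the in-cluster deployment. A minimal sketch (group, version, and service names are illustrative, not read from the test):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # must be <version>.<group>
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                  # illustrative Service fronting the deployment
    namespace: default
  insecureSkipTLSVerify: true         # the e2e test instead supplies a generated caBundle
```

Once the backing deployment is Available, the kube-apiserver serves the new group via this proxy, which is what the "Waited ... for the sample-apiserver to be ready to handle requests" line confirms.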
• [SLOW TEST:5.991 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":80,"skipped":1219,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:25:05.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 9 21:25:05.712: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4936 /api/v1/namespaces/watch-4936/configmaps/e2e-watch-test-label-changed 6866f2d6-fbb4-42ac-8057-d673bcb2e1f9 6772111 0 2020-04-09 21:25:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 9 21:25:05.712: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4936 /api/v1/namespaces/watch-4936/configmaps/e2e-watch-test-label-changed 6866f2d6-fbb4-42ac-8057-d673bcb2e1f9 6772112 0 2020-04-09 21:25:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 9 21:25:05.712: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4936 /api/v1/namespaces/watch-4936/configmaps/e2e-watch-test-label-changed 6866f2d6-fbb4-42ac-8057-d673bcb2e1f9 6772113 0 2020-04-09 21:25:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 9 21:25:15.768: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4936 /api/v1/namespaces/watch-4936/configmaps/e2e-watch-test-label-changed 6866f2d6-fbb4-42ac-8057-d673bcb2e1f9 6772163 0 2020-04-09 21:25:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 9 21:25:15.768: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4936 /api/v1/namespaces/watch-4936/configmaps/e2e-watch-test-label-changed 6866f2d6-fbb4-42ac-8057-d673bcb2e1f9 6772164 0 2020-04-09 21:25:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} 
Apr 9 21:25:15.769: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4936 /api/v1/namespaces/watch-4936/configmaps/e2e-watch-test-label-changed 6866f2d6-fbb4-42ac-8057-d673bcb2e1f9 6772165 0 2020-04-09 21:25:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:25:15.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4936" for this suite. • [SLOW TEST:10.191 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":81,"skipped":1227,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:25:15.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should 
be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5152 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5152 STEP: creating replication controller externalsvc in namespace services-5152 I0409 21:25:15.928758 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5152, replica count: 2 I0409 21:25:18.979151 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 21:25:21.979337 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 9 21:25:22.037: INFO: Creating new exec pod Apr 9 21:25:26.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5152 execpodlbc9z -- /bin/sh -x -c nslookup nodeport-service' Apr 9 21:25:26.319: INFO: stderr: "I0409 21:25:26.199640 845 log.go:172] (0xc000972000) (0xc00096a000) Create stream\nI0409 21:25:26.199711 845 log.go:172] (0xc000972000) (0xc00096a000) Stream added, broadcasting: 1\nI0409 21:25:26.209585 845 log.go:172] (0xc000972000) Reply frame received for 1\nI0409 21:25:26.209631 845 log.go:172] (0xc000972000) (0xc0008c8000) Create stream\nI0409 21:25:26.209642 845 log.go:172] (0xc000972000) (0xc0008c8000) Stream added, broadcasting: 3\nI0409 21:25:26.211643 845 log.go:172] (0xc000972000) Reply frame received for 3\nI0409 21:25:26.211696 845 log.go:172] (0xc000972000) (0xc00096a0a0) Create stream\nI0409 21:25:26.211715 845 log.go:172] (0xc000972000) (0xc00096a0a0) Stream added, 
broadcasting: 5\nI0409 21:25:26.213026 845 log.go:172] (0xc000972000) Reply frame received for 5\nI0409 21:25:26.305496 845 log.go:172] (0xc000972000) Data frame received for 5\nI0409 21:25:26.305532 845 log.go:172] (0xc00096a0a0) (5) Data frame handling\nI0409 21:25:26.305555 845 log.go:172] (0xc00096a0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0409 21:25:26.312607 845 log.go:172] (0xc000972000) Data frame received for 3\nI0409 21:25:26.312624 845 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0409 21:25:26.312641 845 log.go:172] (0xc0008c8000) (3) Data frame sent\nI0409 21:25:26.313617 845 log.go:172] (0xc000972000) Data frame received for 3\nI0409 21:25:26.313641 845 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0409 21:25:26.313667 845 log.go:172] (0xc0008c8000) (3) Data frame sent\nI0409 21:25:26.314101 845 log.go:172] (0xc000972000) Data frame received for 3\nI0409 21:25:26.314124 845 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0409 21:25:26.314220 845 log.go:172] (0xc000972000) Data frame received for 5\nI0409 21:25:26.314238 845 log.go:172] (0xc00096a0a0) (5) Data frame handling\nI0409 21:25:26.316207 845 log.go:172] (0xc000972000) Data frame received for 1\nI0409 21:25:26.316233 845 log.go:172] (0xc00096a000) (1) Data frame handling\nI0409 21:25:26.316247 845 log.go:172] (0xc00096a000) (1) Data frame sent\nI0409 21:25:26.316274 845 log.go:172] (0xc000972000) (0xc00096a000) Stream removed, broadcasting: 1\nI0409 21:25:26.316296 845 log.go:172] (0xc000972000) Go away received\nI0409 21:25:26.316555 845 log.go:172] (0xc000972000) (0xc00096a000) Stream removed, broadcasting: 1\nI0409 21:25:26.316568 845 log.go:172] (0xc000972000) (0xc0008c8000) Stream removed, broadcasting: 3\nI0409 21:25:26.316573 845 log.go:172] (0xc000972000) (0xc00096a0a0) Stream removed, broadcasting: 5\n" Apr 9 21:25:26.320: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5152.svc.cluster.local\tcanonical name = 
externalsvc.services-5152.svc.cluster.local.\nName:\texternalsvc.services-5152.svc.cluster.local\nAddress: 10.107.30.18\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5152, will wait for the garbage collector to delete the pods Apr 9 21:25:26.379: INFO: Deleting ReplicationController externalsvc took: 6.80251ms Apr 9 21:25:26.680: INFO: Terminating ReplicationController externalsvc pods took: 300.257692ms Apr 9 21:25:39.516: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:25:39.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5152" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.783 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":82,"skipped":1241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:25:39.563: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:11.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9317" for this suite. 
• [SLOW TEST:31.470 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:11.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 21:26:11.450: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 21:26:13.462: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064371, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064371, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 21:26:16.495: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:16.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-650" for this suite. STEP: Destroying namespace "webhook-650-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.725 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":84,"skipped":1347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:16.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7924 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7924 I0409 21:26:16.882332 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7924, replica count: 2 I0409 21:26:19.932774 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 21:26:22.933025 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 9 21:26:22.933: INFO: Creating new exec pod Apr 9 21:26:27.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7924 execpodf2lpt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 9 21:26:28.152: INFO: stderr: "I0409 21:26:28.079055 865 log.go:172] (0xc000944000) (0xc00097e000) Create stream\nI0409 21:26:28.079129 865 log.go:172] (0xc000944000) (0xc00097e000) Stream added, broadcasting: 1\nI0409 21:26:28.082002 865 log.go:172] (0xc000944000) Reply frame received for 1\nI0409 21:26:28.082051 865 log.go:172] (0xc000944000) (0xc000962000) Create stream\nI0409 21:26:28.082065 865 log.go:172] (0xc000944000) (0xc000962000) Stream added, broadcasting: 3\nI0409 21:26:28.083167 865 log.go:172] (0xc000944000) Reply frame received for 3\nI0409 21:26:28.083201 865 log.go:172] (0xc000944000) (0xc00097e0a0) Create stream\nI0409 21:26:28.083212 865 log.go:172] (0xc000944000) (0xc00097e0a0) Stream added, broadcasting: 5\nI0409 21:26:28.083939 865 log.go:172] (0xc000944000) Reply frame received for 5\nI0409 21:26:28.145050 865 log.go:172] (0xc000944000) Data frame received for 3\nI0409 
21:26:28.145099 865 log.go:172] (0xc000962000) (3) Data frame handling\nI0409 21:26:28.145299 865 log.go:172] (0xc000944000) Data frame received for 5\nI0409 21:26:28.145325 865 log.go:172] (0xc00097e0a0) (5) Data frame handling\nI0409 21:26:28.145345 865 log.go:172] (0xc00097e0a0) (5) Data frame sent\nI0409 21:26:28.145360 865 log.go:172] (0xc000944000) Data frame received for 5\nI0409 21:26:28.145372 865 log.go:172] (0xc00097e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0409 21:26:28.147314 865 log.go:172] (0xc000944000) Data frame received for 1\nI0409 21:26:28.147343 865 log.go:172] (0xc00097e000) (1) Data frame handling\nI0409 21:26:28.147363 865 log.go:172] (0xc00097e000) (1) Data frame sent\nI0409 21:26:28.147379 865 log.go:172] (0xc000944000) (0xc00097e000) Stream removed, broadcasting: 1\nI0409 21:26:28.147394 865 log.go:172] (0xc000944000) Go away received\nI0409 21:26:28.147793 865 log.go:172] (0xc000944000) (0xc00097e000) Stream removed, broadcasting: 1\nI0409 21:26:28.147824 865 log.go:172] (0xc000944000) (0xc000962000) Stream removed, broadcasting: 3\nI0409 21:26:28.147844 865 log.go:172] (0xc000944000) (0xc00097e0a0) Stream removed, broadcasting: 5\n" Apr 9 21:26:28.152: INFO: stdout: "" Apr 9 21:26:28.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7924 execpodf2lpt -- /bin/sh -x -c nc -zv -t -w 2 10.97.103.182 80' Apr 9 21:26:28.353: INFO: stderr: "I0409 21:26:28.280142 884 log.go:172] (0xc0001054a0) (0xc0009ac000) Create stream\nI0409 21:26:28.280203 884 log.go:172] (0xc0001054a0) (0xc0009ac000) Stream added, broadcasting: 1\nI0409 21:26:28.283335 884 log.go:172] (0xc0001054a0) Reply frame received for 1\nI0409 21:26:28.283377 884 log.go:172] (0xc0001054a0) (0xc00064fae0) Create stream\nI0409 21:26:28.283390 884 log.go:172] (0xc0001054a0) (0xc00064fae0) Stream added, broadcasting: 3\nI0409 
21:26:28.284320 884 log.go:172] (0xc0001054a0) Reply frame received for 3\nI0409 21:26:28.284370 884 log.go:172] (0xc0001054a0) (0xc0009ac140) Create stream\nI0409 21:26:28.284384 884 log.go:172] (0xc0001054a0) (0xc0009ac140) Stream added, broadcasting: 5\nI0409 21:26:28.285547 884 log.go:172] (0xc0001054a0) Reply frame received for 5\nI0409 21:26:28.346923 884 log.go:172] (0xc0001054a0) Data frame received for 5\nI0409 21:26:28.346956 884 log.go:172] (0xc0009ac140) (5) Data frame handling\nI0409 21:26:28.346980 884 log.go:172] (0xc0009ac140) (5) Data frame sent\n+ nc -zv -t -w 2 10.97.103.182 80\nConnection to 10.97.103.182 80 port [tcp/http] succeeded!\nI0409 21:26:28.347124 884 log.go:172] (0xc0001054a0) Data frame received for 3\nI0409 21:26:28.347171 884 log.go:172] (0xc00064fae0) (3) Data frame handling\nI0409 21:26:28.347295 884 log.go:172] (0xc0001054a0) Data frame received for 5\nI0409 21:26:28.347329 884 log.go:172] (0xc0009ac140) (5) Data frame handling\nI0409 21:26:28.348784 884 log.go:172] (0xc0001054a0) Data frame received for 1\nI0409 21:26:28.348808 884 log.go:172] (0xc0009ac000) (1) Data frame handling\nI0409 21:26:28.348824 884 log.go:172] (0xc0009ac000) (1) Data frame sent\nI0409 21:26:28.348839 884 log.go:172] (0xc0001054a0) (0xc0009ac000) Stream removed, broadcasting: 1\nI0409 21:26:28.348938 884 log.go:172] (0xc0001054a0) Go away received\nI0409 21:26:28.349301 884 log.go:172] (0xc0001054a0) (0xc0009ac000) Stream removed, broadcasting: 1\nI0409 21:26:28.349325 884 log.go:172] (0xc0001054a0) (0xc00064fae0) Stream removed, broadcasting: 3\nI0409 21:26:28.349334 884 log.go:172] (0xc0001054a0) (0xc0009ac140) Stream removed, broadcasting: 5\n" Apr 9 21:26:28.353: INFO: stdout: "" Apr 9 21:26:28.353: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:28.369: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "services-7924" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.620 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":85,"skipped":1371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:28.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:26:28.443: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 9 21:26:30.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5947 create -f -' Apr 9 21:26:33.089: INFO: stderr: "" Apr 9 21:26:33.089: INFO: stdout: 
"e2e-test-crd-publish-openapi-947-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 9 21:26:33.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5947 delete e2e-test-crd-publish-openapi-947-crds test-cr' Apr 9 21:26:33.186: INFO: stderr: "" Apr 9 21:26:33.186: INFO: stdout: "e2e-test-crd-publish-openapi-947-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 9 21:26:33.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5947 apply -f -' Apr 9 21:26:33.456: INFO: stderr: "" Apr 9 21:26:33.456: INFO: stdout: "e2e-test-crd-publish-openapi-947-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 9 21:26:33.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5947 delete e2e-test-crd-publish-openapi-947-crds test-cr' Apr 9 21:26:33.576: INFO: stderr: "" Apr 9 21:26:33.576: INFO: stdout: "e2e-test-crd-publish-openapi-947-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 9 21:26:33.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-947-crds' Apr 9 21:26:33.828: INFO: stderr: "" Apr 9 21:26:33.828: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-947-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:36.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5947" for this suite. 
• [SLOW TEST:8.415 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":86,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:36.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 9 21:26:37.535: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 9 21:26:39.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064397, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064397, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064397, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064397, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 21:26:42.568: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:26:42.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:43.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5566" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.963 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":87,"skipped":1444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:43.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:26:43.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e" in namespace "projected-2882" to be "success or failure" Apr 9 
21:26:43.877: INFO: Pod "downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158672ms Apr 9 21:26:45.881: INFO: Pod "downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006853529s Apr 9 21:26:47.922: INFO: Pod "downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04770271s STEP: Saw pod success Apr 9 21:26:47.922: INFO: Pod "downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e" satisfied condition "success or failure" Apr 9 21:26:47.925: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e container client-container: STEP: delete the pod Apr 9 21:26:47.962: INFO: Waiting for pod downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e to disappear Apr 9 21:26:47.967: INFO: Pod downwardapi-volume-fd23acd3-7f01-48c5-944a-f89ad176c06e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:47.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2882" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1487,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:47.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Apr 9 21:26:48.016: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix635338027/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:48.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8912" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":89,"skipped":1491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:48.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 9 21:26:48.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8490' Apr 9 21:26:48.331: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 9 21:26:48.331: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Apr 9 21:26:50.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8490' Apr 9 21:26:50.492: INFO: stderr: "" Apr 9 21:26:50.492: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:50.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8490" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":90,"skipped":1528,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:26:50.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert 
STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 21:26:51.695: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 21:26:53.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 21:26:55.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722064411, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 21:26:58.788: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:26:58.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:26:59.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-626" for this suite. STEP: Destroying namespace "webhook-626-markers" for this suite. 
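The deny behavior exercised above comes from a validating webhook registered against the custom resource via the AdmissionRegistration API. A minimal sketch of such a registration follows; the configuration name, group, resource, and service reference are illustrative, not the ones the e2e framework actually uses:

```shell
# Illustrative ValidatingWebhookConfiguration denying CREATE/UPDATE/DELETE on a
# custom resource; all names below are placeholders, not the framework's own.
cat > /tmp/deny-webhook.yaml <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-changes
webhooks:
- name: deny.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["stable.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]
  clientConfig:
    service:
      namespace: default
      name: e2e-test-webhook
      path: /custom-resource
      port: 443
    caBundle: ""   # placeholder: base64 CA bundle for the webhook server's cert
EOF
# kubectl apply -f /tmp/deny-webhook.yaml   # requires a running cluster
echo "wrote $(wc -l < /tmp/deny-webhook.yaml) manifest lines"
```

With `failurePolicy: Fail`, requests matching the rules are rejected whenever the webhook denies them or is unreachable, which is why the test's subsequent create/update/delete attempts fail until the offending data is removed.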
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.450 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":91,"skipped":1548,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:00.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:27:00.119: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06" in namespace "downward-api-2010" to be "success or failure" Apr 9 21:27:00.140: INFO: Pod "downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06": Phase="Pending", Reason="", readiness=false. Elapsed: 20.533531ms Apr 9 21:27:02.145: INFO: Pod "downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025472616s Apr 9 21:27:04.149: INFO: Pod "downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030073435s STEP: Saw pod success Apr 9 21:27:04.149: INFO: Pod "downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06" satisfied condition "success or failure" Apr 9 21:27:04.153: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06 container client-container: STEP: delete the pod Apr 9 21:27:04.217: INFO: Waiting for pod downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06 to disappear Apr 9 21:27:04.225: INFO: Pod downwardapi-volume-f4ff64a8-f9e9-4e0b-bb84-15d00e3d0f06 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:04.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2010" for this suite. 
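The downward-API pattern the test above exercises can be reproduced by hand: when a container sets no memory limit, a `resourceFieldRef` for `limits.memory` resolves to the node's allocatable memory. A sketch, with illustrative names (this is not the framework's generated manifest):

```shell
# Pod that mounts limits.memory through a downward API volume; since the
# container declares no memory limit, the file reports node allocatable memory.
cat > /tmp/downward-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# kubectl apply -f /tmp/downward-pod.yaml   # then: kubectl logs downward-demo
echo "wrote $(wc -l < /tmp/downward-pod.yaml) manifest lines"
```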
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1556,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:04.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-468a7b70-0b60-43a1-8bba-6b4ac50394de STEP: Creating a pod to test consume secrets Apr 9 21:27:04.293: INFO: Waiting up to 5m0s for pod "pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699" in namespace "secrets-2934" to be "success or failure" Apr 9 21:27:04.297: INFO: Pod "pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699": Phase="Pending", Reason="", readiness=false. Elapsed: 3.835189ms Apr 9 21:27:06.300: INFO: Pod "pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007314919s Apr 9 21:27:08.305: INFO: Pod "pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011790915s STEP: Saw pod success Apr 9 21:27:08.305: INFO: Pod "pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699" satisfied condition "success or failure" Apr 9 21:27:08.308: INFO: Trying to get logs from node jerma-worker pod pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699 container secret-volume-test: STEP: delete the pod Apr 9 21:27:08.340: INFO: Waiting for pod pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699 to disappear Apr 9 21:27:08.345: INFO: Pod pod-secrets-dec4b3b1-d7eb-4e3a-8947-6bdd2d226699 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:08.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2934" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:08.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 9 21:27:08.420: INFO: Waiting up to 5m0s for pod 
"client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f" in namespace "containers-5901" to be "success or failure" Apr 9 21:27:08.441: INFO: Pod "client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.824409ms Apr 9 21:27:10.445: INFO: Pod "client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024818774s Apr 9 21:27:12.450: INFO: Pod "client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029393075s STEP: Saw pod success Apr 9 21:27:12.450: INFO: Pod "client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f" satisfied condition "success or failure" Apr 9 21:27:12.453: INFO: Trying to get logs from node jerma-worker pod client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f container test-container: STEP: delete the pod Apr 9 21:27:12.487: INFO: Waiting for pod client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f to disappear Apr 9 21:27:12.495: INFO: Pod client-containers-1eb8589d-47b9-425e-927a-dfec8c9c162f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:12.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5901" for this suite. 
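The "override arguments" behavior tested above maps Docker's ENTRYPOINT/CMD onto the pod spec: `args` replaces the image's default CMD while leaving its ENTRYPOINT alone (and `command` would replace the ENTRYPOINT). A hedged sketch with placeholder names:

```shell
# Pod whose `args` override the image's default CMD; names are illustrative.
cat > /tmp/args-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # args -> overrides the image CMD; command -> would override ENTRYPOINT
    args: ["echo", "overridden arguments"]
EOF
# kubectl apply -f /tmp/args-pod.yaml && kubectl logs args-demo
echo "wrote $(wc -l < /tmp/args-pod.yaml) manifest lines"
```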
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:12.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:12.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5069" for this suite. 
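The Lease API availability check that follows exercises the `coordination.k8s.io/v1` group. A minimal Lease object looks like this (names and durations are illustrative):

```shell
# Minimal Lease manifest for the coordination.k8s.io/v1 API; values are placeholders.
cat > /tmp/lease.yaml <<'EOF'
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease
spec:
  holderIdentity: demo-holder
  leaseDurationSeconds: 30
EOF
# kubectl apply -f /tmp/lease.yaml && kubectl get lease demo-lease -o yaml
echo "wrote $(wc -l < /tmp/lease.yaml) manifest lines"
```

Leases back coordination features such as leader election and node heartbeats, which is why the conformance suite verifies the API is reachable.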
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":95,"skipped":1608,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:12.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8521/secret-test-ec27dd4b-edac-4fad-b041-b477f1da077f STEP: Creating a pod to test consume secrets Apr 9 21:27:12.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f" in namespace "secrets-8521" to be "success or failure" Apr 9 21:27:12.772: INFO: Pod "pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.94072ms Apr 9 21:27:14.778: INFO: Pod "pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028998491s Apr 9 21:27:16.782: INFO: Pod "pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032879162s STEP: Saw pod success Apr 9 21:27:16.782: INFO: Pod "pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f" satisfied condition "success or failure" Apr 9 21:27:16.794: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f container env-test: STEP: delete the pod Apr 9 21:27:16.807: INFO: Waiting for pod pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f to disappear Apr 9 21:27:16.832: INFO: Pod pod-configmaps-cec8d1b1-fff3-4f72-bc4b-1461e21d439f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:16.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8521" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1620,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:16.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 9 
21:27:21.479: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1555d2ca-cb64-4699-a4d1-3c6b1588657f" Apr 9 21:27:21.479: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1555d2ca-cb64-4699-a4d1-3c6b1588657f" in namespace "pods-2074" to be "terminated due to deadline exceeded" Apr 9 21:27:21.521: INFO: Pod "pod-update-activedeadlineseconds-1555d2ca-cb64-4699-a4d1-3c6b1588657f": Phase="Running", Reason="", readiness=true. Elapsed: 41.737298ms Apr 9 21:27:23.524: INFO: Pod "pod-update-activedeadlineseconds-1555d2ca-cb64-4699-a4d1-3c6b1588657f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.045480423s Apr 9 21:27:23.524: INFO: Pod "pod-update-activedeadlineseconds-1555d2ca-cb64-4699-a4d1-3c6b1588657f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:23.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2074" for this suite. 
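The activeDeadlineSeconds update tested above relies on the field being one of the few mutable pod-spec fields: it may be set or decreased on a running pod, after which the kubelet terminates the pod with phase `Failed` and reason `DeadlineExceeded`. A sketch with illustrative names:

```shell
# Pod with a generous deadline; shrinking it later triggers DeadlineExceeded.
cat > /tmp/deadline-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  restartPolicy: Never
  activeDeadlineSeconds: 300
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
EOF
# kubectl apply -f /tmp/deadline-pod.yaml
# Decrease the deadline on the running pod (increases are rejected):
# kubectl patch pod deadline-demo --type=merge \
#   -p '{"spec":{"activeDeadlineSeconds":5}}'
echo "wrote $(wc -l < /tmp/deadline-pod.yaml) manifest lines"
```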
• [SLOW TEST:6.694 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1626,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:23.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 9 21:27:23.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 9 21:27:23.787: INFO: stderr: "" Apr 9 21:27:23.787: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:23.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5127" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":98,"skipped":1642,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:23.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:34.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7013" for this suite. • [SLOW TEST:11.177 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":99,"skipped":1667,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:34.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-7d11ea00-81d2-4fcf-857b-131836be6517 STEP: Creating a pod to test consume configMaps Apr 9 21:27:35.073: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4" in namespace "projected-2632" to be "success or failure" Apr 9 21:27:35.094: INFO: Pod "pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.081711ms Apr 9 21:27:37.186: INFO: Pod "pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112853727s Apr 9 21:27:39.190: INFO: Pod "pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116963852s STEP: Saw pod success Apr 9 21:27:39.191: INFO: Pod "pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4" satisfied condition "success or failure" Apr 9 21:27:39.193: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4 container projected-configmap-volume-test: STEP: delete the pod Apr 9 21:27:39.253: INFO: Waiting for pod pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4 to disappear Apr 9 21:27:39.268: INFO: Pod pod-projected-configmaps-27869b5e-3b75-4bb1-b8e0-208a0026b3c4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:27:39.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2632" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1679,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:27:39.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:28:39.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7037" for this suite. • [SLOW TEST:60.067 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1694,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:28:39.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 9 21:28:39.397: INFO: namespace kubectl-7644 Apr 9 21:28:39.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create 
-f - --namespace=kubectl-7644' Apr 9 21:28:39.697: INFO: stderr: "" Apr 9 21:28:39.697: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 9 21:28:40.700: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 21:28:40.700: INFO: Found 0 / 1 Apr 9 21:28:41.701: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 21:28:41.701: INFO: Found 0 / 1 Apr 9 21:28:42.702: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 21:28:42.702: INFO: Found 1 / 1 Apr 9 21:28:42.702: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 9 21:28:42.706: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 21:28:42.706: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 9 21:28:42.706: INFO: wait on agnhost-master startup in kubectl-7644 Apr 9 21:28:42.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-nkln6 agnhost-master --namespace=kubectl-7644' Apr 9 21:28:42.809: INFO: stderr: "" Apr 9 21:28:42.810: INFO: stdout: "Paused\n" STEP: exposing RC Apr 9 21:28:42.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7644' Apr 9 21:28:42.942: INFO: stderr: "" Apr 9 21:28:42.942: INFO: stdout: "service/rm2 exposed\n" Apr 9 21:28:42.952: INFO: Service rm2 in namespace kubectl-7644 found. STEP: exposing service Apr 9 21:28:44.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7644' Apr 9 21:28:45.196: INFO: stderr: "" Apr 9 21:28:45.196: INFO: stdout: "service/rm3 exposed\n" Apr 9 21:28:45.210: INFO: Service rm3 in namespace kubectl-7644 found. 
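The expose chain above (RC → rm2 → rm3) works because `kubectl expose` copies the selector of whatever it exposes, so both services select the same `app: agnhost` pods. Roughly, the first expose generates a Service like this (a sketch, not the exact server-side object):

```shell
# Approximation of what `kubectl expose rc agnhost-master --name=rm2
# --port=1234 --target-port=6379` creates; selector copied from the RC.
cat > /tmp/rm2-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: agnhost
  ports:
  - port: 1234
    targetPort: 6379
EOF
# Exposing the service again reuses rm2's selector, so rm3 reaches the same pods:
# kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
echo "wrote $(wc -l < /tmp/rm2-svc.yaml) manifest lines"
```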
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:28:47.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7644" for this suite. • [SLOW TEST:7.884 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":102,"skipped":1695,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:28:47.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6773.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6773.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6773.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 21:28:53.380: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.383: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.387: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.390: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.399: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.401: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod 
dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.404: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.407: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:53.413: INFO: Lookups using dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local] Apr 9 21:28:58.418: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.421: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.425: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod 
dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.428: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.438: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.441: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.444: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.448: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:28:58.456: INFO: Lookups using dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local] Apr 9 21:29:03.417: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.421: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.424: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.428: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.437: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.439: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.442: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod 
dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.445: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:03.451: INFO: Lookups using dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local] Apr 9 21:29:08.417: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.420: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.422: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.425: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod 
dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.433: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.436: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.440: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.443: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:08.450: INFO: Lookups using dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local] Apr 9 21:29:13.418: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local 
from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.422: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.425: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.429: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.439: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.442: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.445: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.448: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the 
server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:13.454: INFO: Lookups using dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local] Apr 9 21:29:18.417: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.421: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.424: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.427: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.435: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod 
dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.437: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.440: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.442: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local from pod dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d: the server could not find the requested resource (get pods dns-test-f032983a-5e8d-45de-9b45-c547bb98938d) Apr 9 21:29:18.447: INFO: Lookups using dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6773.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6773.svc.cluster.local jessie_udp@dns-test-service-2.dns-6773.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6773.svc.cluster.local] Apr 9 21:29:23.449: INFO: DNS probes using dns-6773/dns-test-f032983a-5e8d-45de-9b45-c547bb98938d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:29:23.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-6773" for this suite. • [SLOW TEST:36.581 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":103,"skipped":1704,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:29:23.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 9 21:29:24.065: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:29:29.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2019" for this suite. 
• [SLOW TEST:5.635 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":104,"skipped":1708,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:29:29.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:29:45.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3692" for this suite. • [SLOW TEST:16.285 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":105,"skipped":1718,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:29:45.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7258 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 9 21:29:45.766: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 9 21:30:11.936: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.108 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7258 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:30:11.936: INFO: >>> kubeConfig: /root/.kube/config I0409 21:30:11.967021 6 log.go:172] (0xc00459a8f0) (0xc0019a12c0) Create stream I0409 21:30:11.967058 6 log.go:172] (0xc00459a8f0) (0xc0019a12c0) Stream added, broadcasting: 1 I0409 21:30:11.968860 6 log.go:172] (0xc00459a8f0) Reply frame received for 1 I0409 21:30:11.968905 6 log.go:172] (0xc00459a8f0) (0xc001e88960) Create stream I0409 21:30:11.968921 6 log.go:172] (0xc00459a8f0) (0xc001e88960) Stream added, broadcasting: 3 I0409 21:30:11.970271 6 log.go:172] (0xc00459a8f0) 
Reply frame received for 3 I0409 21:30:11.970327 6 log.go:172] (0xc00459a8f0) (0xc002294460) Create stream I0409 21:30:11.970344 6 log.go:172] (0xc00459a8f0) (0xc002294460) Stream added, broadcasting: 5 I0409 21:30:11.971314 6 log.go:172] (0xc00459a8f0) Reply frame received for 5 I0409 21:30:13.054050 6 log.go:172] (0xc00459a8f0) Data frame received for 5 I0409 21:30:13.054122 6 log.go:172] (0xc00459a8f0) Data frame received for 3 I0409 21:30:13.054244 6 log.go:172] (0xc001e88960) (3) Data frame handling I0409 21:30:13.054312 6 log.go:172] (0xc001e88960) (3) Data frame sent I0409 21:30:13.054355 6 log.go:172] (0xc00459a8f0) Data frame received for 3 I0409 21:30:13.054371 6 log.go:172] (0xc001e88960) (3) Data frame handling I0409 21:30:13.054409 6 log.go:172] (0xc002294460) (5) Data frame handling I0409 21:30:13.056304 6 log.go:172] (0xc00459a8f0) Data frame received for 1 I0409 21:30:13.056334 6 log.go:172] (0xc0019a12c0) (1) Data frame handling I0409 21:30:13.056366 6 log.go:172] (0xc0019a12c0) (1) Data frame sent I0409 21:30:13.056397 6 log.go:172] (0xc00459a8f0) (0xc0019a12c0) Stream removed, broadcasting: 1 I0409 21:30:13.056433 6 log.go:172] (0xc00459a8f0) Go away received I0409 21:30:13.056773 6 log.go:172] (0xc00459a8f0) (0xc0019a12c0) Stream removed, broadcasting: 1 I0409 21:30:13.056799 6 log.go:172] (0xc00459a8f0) (0xc001e88960) Stream removed, broadcasting: 3 I0409 21:30:13.056819 6 log.go:172] (0xc00459a8f0) (0xc002294460) Stream removed, broadcasting: 5 Apr 9 21:30:13.056: INFO: Found all expected endpoints: [netserver-0] Apr 9 21:30:13.060: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.195 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7258 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:30:13.060: INFO: >>> kubeConfig: /root/.kube/config I0409 21:30:13.090723 6 log.go:172] (0xc001d0a4d0) (0xc001e890e0) Create stream I0409 
21:30:13.090751 6 log.go:172] (0xc001d0a4d0) (0xc001e890e0) Stream added, broadcasting: 1 I0409 21:30:13.093078 6 log.go:172] (0xc001d0a4d0) Reply frame received for 1 I0409 21:30:13.093255 6 log.go:172] (0xc001d0a4d0) (0xc001e89180) Create stream I0409 21:30:13.093274 6 log.go:172] (0xc001d0a4d0) (0xc001e89180) Stream added, broadcasting: 3 I0409 21:30:13.094505 6 log.go:172] (0xc001d0a4d0) Reply frame received for 3 I0409 21:30:13.094547 6 log.go:172] (0xc001d0a4d0) (0xc001e89220) Create stream I0409 21:30:13.094563 6 log.go:172] (0xc001d0a4d0) (0xc001e89220) Stream added, broadcasting: 5 I0409 21:30:13.095465 6 log.go:172] (0xc001d0a4d0) Reply frame received for 5 I0409 21:30:14.194588 6 log.go:172] (0xc001d0a4d0) Data frame received for 5 I0409 21:30:14.194627 6 log.go:172] (0xc001e89220) (5) Data frame handling I0409 21:30:14.194696 6 log.go:172] (0xc001d0a4d0) Data frame received for 3 I0409 21:30:14.194788 6 log.go:172] (0xc001e89180) (3) Data frame handling I0409 21:30:14.194830 6 log.go:172] (0xc001e89180) (3) Data frame sent I0409 21:30:14.194855 6 log.go:172] (0xc001d0a4d0) Data frame received for 3 I0409 21:30:14.194877 6 log.go:172] (0xc001e89180) (3) Data frame handling I0409 21:30:14.196530 6 log.go:172] (0xc001d0a4d0) Data frame received for 1 I0409 21:30:14.196557 6 log.go:172] (0xc001e890e0) (1) Data frame handling I0409 21:30:14.196582 6 log.go:172] (0xc001e890e0) (1) Data frame sent I0409 21:30:14.196607 6 log.go:172] (0xc001d0a4d0) (0xc001e890e0) Stream removed, broadcasting: 1 I0409 21:30:14.196709 6 log.go:172] (0xc001d0a4d0) Go away received I0409 21:30:14.196737 6 log.go:172] (0xc001d0a4d0) (0xc001e890e0) Stream removed, broadcasting: 1 I0409 21:30:14.196763 6 log.go:172] (0xc001d0a4d0) (0xc001e89180) Stream removed, broadcasting: 3 I0409 21:30:14.196784 6 log.go:172] (0xc001d0a4d0) (0xc001e89220) Stream removed, broadcasting: 5 Apr 9 21:30:14.196: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:14.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7258" for this suite. • [SLOW TEST:28.479 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1724,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:14.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 9 21:30:14.280: INFO: Waiting up to 5m0s for pod "pod-a8382c26-897e-4990-be2d-bfd67d14f030" in namespace "emptydir-898" to be "success or failure" Apr 9 21:30:14.283: 
INFO: Pod "pod-a8382c26-897e-4990-be2d-bfd67d14f030": Phase="Pending", Reason="", readiness=false. Elapsed: 3.185589ms Apr 9 21:30:16.287: INFO: Pod "pod-a8382c26-897e-4990-be2d-bfd67d14f030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00728999s Apr 9 21:30:18.291: INFO: Pod "pod-a8382c26-897e-4990-be2d-bfd67d14f030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011475547s STEP: Saw pod success Apr 9 21:30:18.291: INFO: Pod "pod-a8382c26-897e-4990-be2d-bfd67d14f030" satisfied condition "success or failure" Apr 9 21:30:18.295: INFO: Trying to get logs from node jerma-worker2 pod pod-a8382c26-897e-4990-be2d-bfd67d14f030 container test-container: STEP: delete the pod Apr 9 21:30:18.327: INFO: Waiting for pod pod-a8382c26-897e-4990-be2d-bfd67d14f030 to disappear Apr 9 21:30:18.331: INFO: Pod pod-a8382c26-897e-4990-be2d-bfd67d14f030 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:18.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-898" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1754,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:18.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1f3f32c4-2bf1-4827-a993-273feb01ea38 STEP: Creating a pod to test consume secrets Apr 9 21:30:18.411: INFO: Waiting up to 5m0s for pod "pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07" in namespace "secrets-6137" to be "success or failure" Apr 9 21:30:18.428: INFO: Pod "pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07": Phase="Pending", Reason="", readiness=false. Elapsed: 16.351973ms Apr 9 21:30:20.464: INFO: Pod "pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052518612s Apr 9 21:30:22.468: INFO: Pod "pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07": Phase="Running", Reason="", readiness=true. Elapsed: 4.056703693s Apr 9 21:30:24.472: INFO: Pod "pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.060944634s STEP: Saw pod success Apr 9 21:30:24.472: INFO: Pod "pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07" satisfied condition "success or failure" Apr 9 21:30:24.475: INFO: Trying to get logs from node jerma-worker pod pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07 container secret-volume-test: STEP: delete the pod Apr 9 21:30:24.523: INFO: Waiting for pod pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07 to disappear Apr 9 21:30:24.529: INFO: Pod pod-secrets-354ea53c-dc4c-4b21-8b5c-244c770b8d07 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:24.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6137" for this suite. • [SLOW TEST:6.194 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1772,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:24.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 9 21:30:29.132: INFO: Successfully updated pod "annotationupdate50009ba4-6c8c-4655-8aef-551ed12c2854" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:31.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6187" for this suite. • [SLOW TEST:6.634 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1775,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:31.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with 
--port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 9 21:30:31.206: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:31.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7419" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":110,"skipped":1786,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:31.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 9 21:30:35.886: INFO: Successfully updated pod "labelsupdatec0aff5b4-eebf-4884-89f3-ce4ef60d4fdb" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:37.942: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5339" for this suite. • [SLOW TEST:6.650 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1788,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:37.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:30:38.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679" in namespace "downward-api-5836" to be "success or failure" Apr 9 21:30:38.050: INFO: Pod "downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.757618ms Apr 9 21:30:40.055: INFO: Pod "downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020269838s Apr 9 21:30:42.058: INFO: Pod "downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023799318s STEP: Saw pod success Apr 9 21:30:42.059: INFO: Pod "downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679" satisfied condition "success or failure" Apr 9 21:30:42.061: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679 container client-container: STEP: delete the pod Apr 9 21:30:42.091: INFO: Waiting for pod downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679 to disappear Apr 9 21:30:42.129: INFO: Pod downwardapi-volume-af578ff6-cd2b-4e73-b6e9-777fc9db7679 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5836" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1802,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:42.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Apr 9 21:30:42.240: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 9 21:30:42.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8812' Apr 9 21:30:42.497: INFO: stderr: "" Apr 9 21:30:42.497: INFO: stdout: "service/agnhost-slave created\n" Apr 9 21:30:42.498: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 9 21:30:42.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8812' Apr 9 21:30:42.742: INFO: stderr: "" Apr 9 21:30:42.742: INFO: stdout: 
"service/agnhost-master created\n" Apr 9 21:30:42.742: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 9 21:30:42.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8812' Apr 9 21:30:42.989: INFO: stderr: "" Apr 9 21:30:42.989: INFO: stdout: "service/frontend created\n" Apr 9 21:30:42.989: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 9 21:30:42.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8812' Apr 9 21:30:43.229: INFO: stderr: "" Apr 9 21:30:43.229: INFO: stdout: "deployment.apps/frontend created\n" Apr 9 21:30:43.229: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 9 21:30:43.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8812' Apr 9 21:30:43.716: INFO: stderr: "" Apr 9 21:30:43.716: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 9 21:30:43.717: INFO: 
apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 9 21:30:43.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8812' Apr 9 21:30:44.406: INFO: stderr: "" Apr 9 21:30:44.406: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 9 21:30:44.406: INFO: Waiting for all frontend pods to be Running. Apr 9 21:30:54.457: INFO: Waiting for frontend to serve content. Apr 9 21:30:54.467: INFO: Trying to add a new entry to the guestbook. Apr 9 21:30:54.477: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 9 21:30:54.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8812' Apr 9 21:30:54.605: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:30:54.605: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 9 21:30:54.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8812' Apr 9 21:30:54.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:30:54.785: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 9 21:30:54.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8812' Apr 9 21:30:54.942: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:30:54.942: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 9 21:30:54.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8812' Apr 9 21:30:55.057: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:30:55.057: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 9 21:30:55.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8812' Apr 9 21:30:55.176: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:30:55.176: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 9 21:30:55.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8812' Apr 9 21:30:55.296: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:30:55.296: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:30:55.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8812" for this suite. • [SLOW TEST:13.151 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":113,"skipped":1817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:30:55.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:31:13.826: INFO: Container started at 2020-04-09 21:30:58 +0000 UTC, pod became ready at 2020-04-09 21:31:13 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:31:13.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4863" for this suite. • [SLOW TEST:18.524 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1876,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:31:13.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: 
creating a pod Apr 9 21:31:13.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8760 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 9 21:31:14.014: INFO: stderr: "" Apr 9 21:31:14.014: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Apr 9 21:31:14.014: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 9 21:31:14.014: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8760" to be "running and ready, or succeeded" Apr 9 21:31:14.032: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.340924ms Apr 9 21:31:16.037: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02251063s Apr 9 21:31:18.044: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.029803552s Apr 9 21:31:18.044: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 9 21:31:18.044: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Apr 9 21:31:18.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8760' Apr 9 21:31:18.148: INFO: stderr: "" Apr 9 21:31:18.148: INFO: stdout: "I0409 21:31:16.216007 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/rmx 243\nI0409 21:31:16.416213 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/2kn 237\nI0409 21:31:16.616184 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/xjgx 442\nI0409 21:31:16.816179 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/l5sh 288\nI0409 21:31:17.016192 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/4krl 412\nI0409 21:31:17.216174 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/ctsq 574\nI0409 21:31:17.416164 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/gfc 347\nI0409 21:31:17.616233 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/wcz 278\nI0409 21:31:17.816163 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vz6 480\nI0409 21:31:18.016236 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/s2r 502\n" STEP: limiting log lines Apr 9 21:31:18.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8760 --tail=1' Apr 9 21:31:18.254: INFO: stderr: "" Apr 9 21:31:18.254: INFO: stdout: "I0409 21:31:18.216211 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/bfm 314\n" Apr 9 21:31:18.254: INFO: got output "I0409 21:31:18.216211 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/bfm 314\n" STEP: limiting log bytes Apr 9 21:31:18.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8760 --limit-bytes=1' Apr 9 21:31:18.375: INFO: stderr: "" Apr 9 21:31:18.375: INFO: stdout: "I" Apr 9 21:31:18.375: INFO: got 
output "I" STEP: exposing timestamps Apr 9 21:31:18.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8760 --tail=1 --timestamps' Apr 9 21:31:18.491: INFO: stderr: "" Apr 9 21:31:18.491: INFO: stdout: "2020-04-09T21:31:18.416411867Z I0409 21:31:18.416211 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/27d 314\n" Apr 9 21:31:18.491: INFO: got output "2020-04-09T21:31:18.416411867Z I0409 21:31:18.416211 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/27d 314\n" STEP: restricting to a time range Apr 9 21:31:20.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8760 --since=1s' Apr 9 21:31:21.125: INFO: stderr: "" Apr 9 21:31:21.125: INFO: stdout: "I0409 21:31:20.216193 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/tfqx 330\nI0409 21:31:20.416191 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/2t2w 329\nI0409 21:31:20.616221 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/6t4 262\nI0409 21:31:20.816202 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/nwm 442\nI0409 21:31:21.016220 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/4g78 516\n" Apr 9 21:31:21.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8760 --since=24h' Apr 9 21:31:21.237: INFO: stderr: "" Apr 9 21:31:21.237: INFO: stdout: "I0409 21:31:16.216007 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/rmx 243\nI0409 21:31:16.416213 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/2kn 237\nI0409 21:31:16.616184 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/xjgx 442\nI0409 21:31:16.816179 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/l5sh 288\nI0409 21:31:17.016192 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/4krl 412\nI0409 
21:31:17.216174 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/ctsq 574\nI0409 21:31:17.416164 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/gfc 347\nI0409 21:31:17.616233 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/wcz 278\nI0409 21:31:17.816163 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vz6 480\nI0409 21:31:18.016236 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/s2r 502\nI0409 21:31:18.216211 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/bfm 314\nI0409 21:31:18.416211 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/27d 314\nI0409 21:31:18.616237 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/bbx 330\nI0409 21:31:18.816242 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/62kz 424\nI0409 21:31:19.016187 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/clk 484\nI0409 21:31:19.216187 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/cqr 298\nI0409 21:31:19.416204 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/2dbt 415\nI0409 21:31:19.616193 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/wrwk 249\nI0409 21:31:19.816216 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/lsw 218\nI0409 21:31:20.016209 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/sbf4 450\nI0409 21:31:20.216193 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/tfqx 330\nI0409 21:31:20.416191 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/2t2w 329\nI0409 21:31:20.616221 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/6t4 262\nI0409 21:31:20.816202 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/nwm 442\nI0409 21:31:21.016220 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/4g78 516\nI0409 21:31:21.216141 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/tncf 288\n" [AfterEach] Kubectl logs 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Apr 9 21:31:21.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8760' Apr 9 21:31:23.862: INFO: stderr: "" Apr 9 21:31:23.862: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:31:23.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8760" for this suite. • [SLOW TEST:10.044 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":115,"skipped":1896,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:31:23.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-031d5532-8438-4bbd-a753-7496f303f42a STEP: Creating a pod to test consume configMaps Apr 9 21:31:24.002: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677" in namespace "projected-6440" to be "success or failure" Apr 9 21:31:24.005: INFO: Pod "pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677": Phase="Pending", Reason="", readiness=false. Elapsed: 3.292858ms Apr 9 21:31:26.009: INFO: Pod "pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007301066s Apr 9 21:31:28.013: INFO: Pod "pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011478185s STEP: Saw pod success Apr 9 21:31:28.013: INFO: Pod "pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677" satisfied condition "success or failure" Apr 9 21:31:28.016: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677 container projected-configmap-volume-test: STEP: delete the pod Apr 9 21:31:28.046: INFO: Waiting for pod pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677 to disappear Apr 9 21:31:28.058: INFO: Pod pod-projected-configmaps-ac065d63-729a-43cd-a1d5-33b2fac86677 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:31:28.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6440" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1914,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:31:28.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 9 21:31:28.152: INFO: Waiting up to 5m0s for pod "pod-db695c08-5170-46a4-a17c-7eb312047989" in namespace "emptydir-8039" to be "success or failure" Apr 9 21:31:28.155: INFO: Pod "pod-db695c08-5170-46a4-a17c-7eb312047989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.895201ms Apr 9 21:31:30.158: INFO: Pod "pod-db695c08-5170-46a4-a17c-7eb312047989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005961906s Apr 9 21:31:32.163: INFO: Pod "pod-db695c08-5170-46a4-a17c-7eb312047989": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01033016s STEP: Saw pod success Apr 9 21:31:32.163: INFO: Pod "pod-db695c08-5170-46a4-a17c-7eb312047989" satisfied condition "success or failure" Apr 9 21:31:32.166: INFO: Trying to get logs from node jerma-worker pod pod-db695c08-5170-46a4-a17c-7eb312047989 container test-container: STEP: delete the pod Apr 9 21:31:32.202: INFO: Waiting for pod pod-db695c08-5170-46a4-a17c-7eb312047989 to disappear Apr 9 21:31:32.214: INFO: Pod pod-db695c08-5170-46a4-a17c-7eb312047989 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:31:32.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8039" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1934,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:31:32.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 9 21:31:32.300: INFO: Waiting up to 5m0s for pod "downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967" in namespace "downward-api-4986" to be "success or failure" Apr 9 21:31:32.304: INFO: Pod 
"downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008376ms Apr 9 21:31:34.308: INFO: Pod "downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007913267s Apr 9 21:31:36.312: INFO: Pod "downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011970325s STEP: Saw pod success Apr 9 21:31:36.312: INFO: Pod "downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967" satisfied condition "success or failure" Apr 9 21:31:36.315: INFO: Trying to get logs from node jerma-worker2 pod downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967 container dapi-container: STEP: delete the pod Apr 9 21:31:36.336: INFO: Waiting for pod downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967 to disappear Apr 9 21:31:36.340: INFO: Pod downward-api-f9a5692c-9022-4e84-84ba-b33aa049c967 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:31:36.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4986" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1959,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:31:36.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-5e28be60-c912-4722-88a5-36197b94d65f STEP: Creating a pod to test consume secrets Apr 9 21:31:36.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a" in namespace "projected-1693" to be "success or failure" Apr 9 21:31:36.450: INFO: Pod "pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.961696ms Apr 9 21:31:38.454: INFO: Pod "pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006957416s Apr 9 21:31:40.457: INFO: Pod "pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010762797s STEP: Saw pod success Apr 9 21:31:40.457: INFO: Pod "pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a" satisfied condition "success or failure" Apr 9 21:31:40.460: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a container projected-secret-volume-test: STEP: delete the pod Apr 9 21:31:40.491: INFO: Waiting for pod pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a to disappear Apr 9 21:31:40.497: INFO: Pod pod-projected-secrets-76b79cbb-e390-4d74-b6c7-82f6ca52df8a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:31:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1693" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1975,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:31:40.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:31:40.591: INFO: >>> kubeConfig: /root/.kube/config 
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 9 21:31:43.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 create -f -' Apr 9 21:31:46.357: INFO: stderr: "" Apr 9 21:31:46.357: INFO: stdout: "e2e-test-crd-publish-openapi-1370-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 9 21:31:46.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 delete e2e-test-crd-publish-openapi-1370-crds test-cr' Apr 9 21:31:46.458: INFO: stderr: "" Apr 9 21:31:46.458: INFO: stdout: "e2e-test-crd-publish-openapi-1370-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 9 21:31:46.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 apply -f -' Apr 9 21:31:46.705: INFO: stderr: "" Apr 9 21:31:46.705: INFO: stdout: "e2e-test-crd-publish-openapi-1370-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 9 21:31:46.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 delete e2e-test-crd-publish-openapi-1370-crds test-cr' Apr 9 21:31:46.831: INFO: stderr: "" Apr 9 21:31:46.831: INFO: stdout: "e2e-test-crd-publish-openapi-1370-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 9 21:31:46.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1370-crds' Apr 9 21:31:47.086: INFO: stderr: "" Apr 9 21:31:47.086: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1370-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:31:49.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8188" for this suite. • [SLOW TEST:9.453 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":120,"skipped":1975,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:31:49.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-634bdc5b-566d-4bf2-a95f-105ceca4600d STEP: Creating configMap with name cm-test-opt-upd-0f36492f-4bae-4be1-b3ee-9161b9e890bf STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-634bdc5b-566d-4bf2-a95f-105ceca4600d STEP: Updating configmap 
cm-test-opt-upd-0f36492f-4bae-4be1-b3ee-9161b9e890bf STEP: Creating configMap with name cm-test-opt-create-19d103d1-57bf-4b9b-b79b-281de3e23db9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:33:22.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1563" for this suite. • [SLOW TEST:92.628 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1984,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:33:22.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:33:22.683: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc 
"condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 9 21:33:24.724: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:33:25.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6821" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":122,"skipped":1989,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:33:25.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 9 21:33:26.219: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4368 
/api/v1/namespaces/watch-4368/configmaps/e2e-watch-test-watch-closed 7a72578e-1aca-4444-b133-e3434bf8543d 6775164 0 2020-04-09 21:33:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 9 21:33:26.219: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4368 /api/v1/namespaces/watch-4368/configmaps/e2e-watch-test-watch-closed 7a72578e-1aca-4444-b133-e3434bf8543d 6775165 0 2020-04-09 21:33:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 9 21:33:26.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4368 /api/v1/namespaces/watch-4368/configmaps/e2e-watch-test-watch-closed 7a72578e-1aca-4444-b133-e3434bf8543d 6775167 0 2020-04-09 21:33:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 9 21:33:26.404: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4368 /api/v1/namespaces/watch-4368/configmaps/e2e-watch-test-watch-closed 7a72578e-1aca-4444-b133-e3434bf8543d 6775169 0 2020-04-09 21:33:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:33:26.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4368" 
for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":123,"skipped":1991,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:33:26.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-621fa660-a2cd-49f8-b33c-50253cc91d29 in namespace container-probe-5993 Apr 9 21:33:30.844: INFO: Started pod busybox-621fa660-a2cd-49f8-b33c-50253cc91d29 in namespace container-probe-5993 STEP: checking the pod's current state and verifying that restartCount is present Apr 9 21:33:30.848: INFO: Initial restart count of pod busybox-621fa660-a2cd-49f8-b33c-50253cc91d29 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:37:31.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5993" for this suite. 
• [SLOW TEST:245.021 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2004,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:37:31.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c in namespace container-probe-9628 Apr 9 21:37:35.529: INFO: Started pod liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c in namespace container-probe-9628 STEP: checking the pod's current state and verifying that restartCount is present Apr 9 21:37:35.532: INFO: Initial restart count of pod liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c is 0 Apr 9 21:37:51.568: INFO: Restart count of pod 
container-probe-9628/liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c is now 1 (16.036261454s elapsed) Apr 9 21:38:11.609: INFO: Restart count of pod container-probe-9628/liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c is now 2 (36.077158267s elapsed) Apr 9 21:38:31.651: INFO: Restart count of pod container-probe-9628/liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c is now 3 (56.118482784s elapsed) Apr 9 21:38:51.703: INFO: Restart count of pod container-probe-9628/liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c is now 4 (1m16.170497361s elapsed) Apr 9 21:40:01.920: INFO: Restart count of pod container-probe-9628/liveness-852a9c72-27a5-4495-94bb-0d2e29a7246c is now 5 (2m26.388150359s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:40:01.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9628" for this suite. • [SLOW TEST:150.514 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2017,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:40:01.950: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1590 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1590 I0409 21:40:02.378384 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1590, replica count: 2 I0409 21:40:05.428836 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 21:40:08.429062 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 9 21:40:08.429: INFO: Creating new exec pod Apr 9 21:40:13.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1590 execpodlbtld -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 9 21:40:13.733: INFO: stderr: "I0409 21:40:13.631745 1745 log.go:172] (0xc0006a89a0) (0xc000980140) Create stream\nI0409 21:40:13.631816 1745 log.go:172] (0xc0006a89a0) (0xc000980140) Stream added, broadcasting: 1\nI0409 21:40:13.634202 1745 log.go:172] (0xc0006a89a0) Reply frame received for 1\nI0409 21:40:13.634227 1745 log.go:172] (0xc0006a89a0) (0xc000a08000) Create stream\nI0409 21:40:13.634235 1745 log.go:172] (0xc0006a89a0) (0xc000a08000) Stream added, broadcasting: 3\nI0409 21:40:13.635065 1745 log.go:172] (0xc0006a89a0) 
Reply frame received for 3\nI0409 21:40:13.635134 1745 log.go:172] (0xc0006a89a0) (0xc0002a1400) Create stream\nI0409 21:40:13.635151 1745 log.go:172] (0xc0006a89a0) (0xc0002a1400) Stream added, broadcasting: 5\nI0409 21:40:13.635825 1745 log.go:172] (0xc0006a89a0) Reply frame received for 5\nI0409 21:40:13.726255 1745 log.go:172] (0xc0006a89a0) Data frame received for 5\nI0409 21:40:13.726308 1745 log.go:172] (0xc0002a1400) (5) Data frame handling\nI0409 21:40:13.726334 1745 log.go:172] (0xc0002a1400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0409 21:40:13.726439 1745 log.go:172] (0xc0006a89a0) Data frame received for 5\nI0409 21:40:13.726476 1745 log.go:172] (0xc0002a1400) (5) Data frame handling\nI0409 21:40:13.726503 1745 log.go:172] (0xc0002a1400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0409 21:40:13.726736 1745 log.go:172] (0xc0006a89a0) Data frame received for 5\nI0409 21:40:13.726769 1745 log.go:172] (0xc0002a1400) (5) Data frame handling\nI0409 21:40:13.726921 1745 log.go:172] (0xc0006a89a0) Data frame received for 3\nI0409 21:40:13.726944 1745 log.go:172] (0xc000a08000) (3) Data frame handling\nI0409 21:40:13.728358 1745 log.go:172] (0xc0006a89a0) Data frame received for 1\nI0409 21:40:13.728375 1745 log.go:172] (0xc000980140) (1) Data frame handling\nI0409 21:40:13.728395 1745 log.go:172] (0xc000980140) (1) Data frame sent\nI0409 21:40:13.728421 1745 log.go:172] (0xc0006a89a0) (0xc000980140) Stream removed, broadcasting: 1\nI0409 21:40:13.728435 1745 log.go:172] (0xc0006a89a0) Go away received\nI0409 21:40:13.728913 1745 log.go:172] (0xc0006a89a0) (0xc000980140) Stream removed, broadcasting: 1\nI0409 21:40:13.728936 1745 log.go:172] (0xc0006a89a0) (0xc000a08000) Stream removed, broadcasting: 3\nI0409 21:40:13.728951 1745 log.go:172] (0xc0006a89a0) (0xc0002a1400) Stream removed, broadcasting: 5\n" Apr 9 21:40:13.734: INFO: stdout: "" Apr 9 21:40:13.735: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1590 execpodlbtld -- /bin/sh -x -c nc -zv -t -w 2 10.108.42.170 80' Apr 9 21:40:13.953: INFO: stderr: "I0409 21:40:13.868741 1767 log.go:172] (0xc00061ca50) (0xc000619e00) Create stream\nI0409 21:40:13.868805 1767 log.go:172] (0xc00061ca50) (0xc000619e00) Stream added, broadcasting: 1\nI0409 21:40:13.871815 1767 log.go:172] (0xc00061ca50) Reply frame received for 1\nI0409 21:40:13.871861 1767 log.go:172] (0xc00061ca50) (0xc0006dc000) Create stream\nI0409 21:40:13.871878 1767 log.go:172] (0xc00061ca50) (0xc0006dc000) Stream added, broadcasting: 3\nI0409 21:40:13.872883 1767 log.go:172] (0xc00061ca50) Reply frame received for 3\nI0409 21:40:13.872918 1767 log.go:172] (0xc00061ca50) (0xc0005d4000) Create stream\nI0409 21:40:13.872929 1767 log.go:172] (0xc00061ca50) (0xc0005d4000) Stream added, broadcasting: 5\nI0409 21:40:13.874081 1767 log.go:172] (0xc00061ca50) Reply frame received for 5\nI0409 21:40:13.947760 1767 log.go:172] (0xc00061ca50) Data frame received for 5\nI0409 21:40:13.947809 1767 log.go:172] (0xc0005d4000) (5) Data frame handling\nI0409 21:40:13.947850 1767 log.go:172] (0xc0005d4000) (5) Data frame sent\nI0409 21:40:13.947867 1767 log.go:172] (0xc00061ca50) Data frame received for 5\nI0409 21:40:13.947881 1767 log.go:172] (0xc0005d4000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.42.170 80\nConnection to 10.108.42.170 80 port [tcp/http] succeeded!\nI0409 21:40:13.947901 1767 log.go:172] (0xc00061ca50) Data frame received for 3\nI0409 21:40:13.947915 1767 log.go:172] (0xc0006dc000) (3) Data frame handling\nI0409 21:40:13.949230 1767 log.go:172] (0xc00061ca50) Data frame received for 1\nI0409 21:40:13.949248 1767 log.go:172] (0xc000619e00) (1) Data frame handling\nI0409 21:40:13.949262 1767 log.go:172] (0xc000619e00) (1) Data frame sent\nI0409 21:40:13.949331 1767 log.go:172] (0xc00061ca50) (0xc000619e00) Stream removed, broadcasting: 1\nI0409 21:40:13.949479 1767 
log.go:172] (0xc00061ca50) Go away received\nI0409 21:40:13.949746 1767 log.go:172] (0xc00061ca50) (0xc000619e00) Stream removed, broadcasting: 1\nI0409 21:40:13.949779 1767 log.go:172] (0xc00061ca50) (0xc0006dc000) Stream removed, broadcasting: 3\nI0409 21:40:13.949794 1767 log.go:172] (0xc00061ca50) (0xc0005d4000) Stream removed, broadcasting: 5\n" Apr 9 21:40:13.953: INFO: stdout: "" Apr 9 21:40:13.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1590 execpodlbtld -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32321' Apr 9 21:40:14.174: INFO: stderr: "I0409 21:40:14.088696 1788 log.go:172] (0xc00063ed10) (0xc000ab6000) Create stream\nI0409 21:40:14.088754 1788 log.go:172] (0xc00063ed10) (0xc000ab6000) Stream added, broadcasting: 1\nI0409 21:40:14.092195 1788 log.go:172] (0xc00063ed10) Reply frame received for 1\nI0409 21:40:14.092261 1788 log.go:172] (0xc00063ed10) (0xc0005e9b80) Create stream\nI0409 21:40:14.092282 1788 log.go:172] (0xc00063ed10) (0xc0005e9b80) Stream added, broadcasting: 3\nI0409 21:40:14.093314 1788 log.go:172] (0xc00063ed10) Reply frame received for 3\nI0409 21:40:14.093381 1788 log.go:172] (0xc00063ed10) (0xc000ab60a0) Create stream\nI0409 21:40:14.093418 1788 log.go:172] (0xc00063ed10) (0xc000ab60a0) Stream added, broadcasting: 5\nI0409 21:40:14.094309 1788 log.go:172] (0xc00063ed10) Reply frame received for 5\nI0409 21:40:14.166330 1788 log.go:172] (0xc00063ed10) Data frame received for 3\nI0409 21:40:14.166367 1788 log.go:172] (0xc0005e9b80) (3) Data frame handling\nI0409 21:40:14.166383 1788 log.go:172] (0xc00063ed10) Data frame received for 5\nI0409 21:40:14.166389 1788 log.go:172] (0xc000ab60a0) (5) Data frame handling\nI0409 21:40:14.166398 1788 log.go:172] (0xc000ab60a0) (5) Data frame sent\nI0409 21:40:14.166404 1788 log.go:172] (0xc00063ed10) Data frame received for 5\nI0409 21:40:14.166408 1788 log.go:172] (0xc000ab60a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 
32321\nConnection to 172.17.0.10 32321 port [tcp/32321] succeeded!\nI0409 21:40:14.168120 1788 log.go:172] (0xc00063ed10) Data frame received for 1\nI0409 21:40:14.168146 1788 log.go:172] (0xc000ab6000) (1) Data frame handling\nI0409 21:40:14.168161 1788 log.go:172] (0xc000ab6000) (1) Data frame sent\nI0409 21:40:14.168174 1788 log.go:172] (0xc00063ed10) (0xc000ab6000) Stream removed, broadcasting: 1\nI0409 21:40:14.168191 1788 log.go:172] (0xc00063ed10) Go away received\nI0409 21:40:14.168617 1788 log.go:172] (0xc00063ed10) (0xc000ab6000) Stream removed, broadcasting: 1\nI0409 21:40:14.168693 1788 log.go:172] (0xc00063ed10) (0xc0005e9b80) Stream removed, broadcasting: 3\nI0409 21:40:14.168709 1788 log.go:172] (0xc00063ed10) (0xc000ab60a0) Stream removed, broadcasting: 5\n" Apr 9 21:40:14.174: INFO: stdout: "" Apr 9 21:40:14.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1590 execpodlbtld -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32321' Apr 9 21:40:14.378: INFO: stderr: "I0409 21:40:14.310842 1810 log.go:172] (0xc0003d4790) (0xc000631ea0) Create stream\nI0409 21:40:14.310899 1810 log.go:172] (0xc0003d4790) (0xc000631ea0) Stream added, broadcasting: 1\nI0409 21:40:14.313403 1810 log.go:172] (0xc0003d4790) Reply frame received for 1\nI0409 21:40:14.313472 1810 log.go:172] (0xc0003d4790) (0xc000598780) Create stream\nI0409 21:40:14.313496 1810 log.go:172] (0xc0003d4790) (0xc000598780) Stream added, broadcasting: 3\nI0409 21:40:14.314694 1810 log.go:172] (0xc0003d4790) Reply frame received for 3\nI0409 21:40:14.314742 1810 log.go:172] (0xc0003d4790) (0xc000631f40) Create stream\nI0409 21:40:14.314761 1810 log.go:172] (0xc0003d4790) (0xc000631f40) Stream added, broadcasting: 5\nI0409 21:40:14.315662 1810 log.go:172] (0xc0003d4790) Reply frame received for 5\nI0409 21:40:14.373262 1810 log.go:172] (0xc0003d4790) Data frame received for 3\nI0409 21:40:14.373306 1810 log.go:172] (0xc000598780) (3) Data frame 
handling\nI0409 21:40:14.373337 1810 log.go:172] (0xc0003d4790) Data frame received for 5\nI0409 21:40:14.373356 1810 log.go:172] (0xc000631f40) (5) Data frame handling\nI0409 21:40:14.373380 1810 log.go:172] (0xc000631f40) (5) Data frame sent\nI0409 21:40:14.373399 1810 log.go:172] (0xc0003d4790) Data frame received for 5\nI0409 21:40:14.373412 1810 log.go:172] (0xc000631f40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32321\nConnection to 172.17.0.8 32321 port [tcp/32321] succeeded!\nI0409 21:40:14.374691 1810 log.go:172] (0xc0003d4790) Data frame received for 1\nI0409 21:40:14.374724 1810 log.go:172] (0xc000631ea0) (1) Data frame handling\nI0409 21:40:14.374751 1810 log.go:172] (0xc000631ea0) (1) Data frame sent\nI0409 21:40:14.374773 1810 log.go:172] (0xc0003d4790) (0xc000631ea0) Stream removed, broadcasting: 1\nI0409 21:40:14.374803 1810 log.go:172] (0xc0003d4790) Go away received\nI0409 21:40:14.375034 1810 log.go:172] (0xc0003d4790) (0xc000631ea0) Stream removed, broadcasting: 1\nI0409 21:40:14.375046 1810 log.go:172] (0xc0003d4790) (0xc000598780) Stream removed, broadcasting: 3\nI0409 21:40:14.375052 1810 log.go:172] (0xc0003d4790) (0xc000631f40) Stream removed, broadcasting: 5\n" Apr 9 21:40:14.378: INFO: stdout: "" Apr 9 21:40:14.378: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:40:14.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1590" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.489 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":126,"skipped":2037,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:40:14.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:40:14.563: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:40:15.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5794" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":127,"skipped":2063,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:40:15.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b2680165-054f-456d-9eea-43474434de40 STEP: Creating a pod to test consume secrets Apr 9 21:40:15.820: INFO: Waiting up to 5m0s for pod "pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100" in namespace "secrets-6189" to be "success or failure" Apr 9 21:40:15.825: INFO: Pod "pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440976ms Apr 9 21:40:17.867: INFO: Pod "pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046220841s Apr 9 21:40:19.893: INFO: Pod "pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07282138s STEP: Saw pod success Apr 9 21:40:19.893: INFO: Pod "pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100" satisfied condition "success or failure" Apr 9 21:40:19.911: INFO: Trying to get logs from node jerma-worker pod pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100 container secret-env-test: STEP: delete the pod Apr 9 21:40:20.041: INFO: Waiting for pod pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100 to disappear Apr 9 21:40:20.060: INFO: Pod pod-secrets-6f0158a1-db06-4ac0-b1ff-87a79ae1d100 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:40:20.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6189" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:40:20.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-abe12ce3-1b2c-4ed0-b03f-47469b26c683 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-abe12ce3-1b2c-4ed0-b03f-47469b26c683 STEP: waiting to 
observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:40:26.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1767" for this suite. • [SLOW TEST:6.201 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2103,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:40:26.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 9 21:40:26.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2617 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true 
--generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 9 21:40:28.870: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0409 21:40:28.794643 1833 log.go:172] (0xc000a5a0b0) (0xc000a160a0) Create stream\nI0409 21:40:28.794704 1833 log.go:172] (0xc000a5a0b0) (0xc000a160a0) Stream added, broadcasting: 1\nI0409 21:40:28.796406 1833 log.go:172] (0xc000a5a0b0) Reply frame received for 1\nI0409 21:40:28.796460 1833 log.go:172] (0xc000a5a0b0) (0xc000a02000) Create stream\nI0409 21:40:28.796486 1833 log.go:172] (0xc000a5a0b0) (0xc000a02000) Stream added, broadcasting: 3\nI0409 21:40:28.798003 1833 log.go:172] (0xc000a5a0b0) Reply frame received for 3\nI0409 21:40:28.798189 1833 log.go:172] (0xc000a5a0b0) (0xc000a16140) Create stream\nI0409 21:40:28.798226 1833 log.go:172] (0xc000a5a0b0) (0xc000a16140) Stream added, broadcasting: 5\nI0409 21:40:28.799199 1833 log.go:172] (0xc000a5a0b0) Reply frame received for 5\nI0409 21:40:28.799221 1833 log.go:172] (0xc000a5a0b0) (0xc000a020a0) Create stream\nI0409 21:40:28.799229 1833 log.go:172] (0xc000a5a0b0) (0xc000a020a0) Stream added, broadcasting: 7\nI0409 21:40:28.800060 1833 log.go:172] (0xc000a5a0b0) Reply frame received for 7\nI0409 21:40:28.800255 1833 log.go:172] (0xc000a02000) (3) Writing data frame\nI0409 21:40:28.800358 1833 log.go:172] (0xc000a02000) (3) Writing data frame\nI0409 21:40:28.801369 1833 log.go:172] (0xc000a5a0b0) Data frame received for 5\nI0409 21:40:28.801388 1833 log.go:172] (0xc000a16140) (5) Data frame handling\nI0409 21:40:28.801402 1833 log.go:172] (0xc000a16140) (5) Data frame sent\nI0409 21:40:28.802128 1833 log.go:172] (0xc000a5a0b0) Data frame received for 5\nI0409 21:40:28.802142 1833 log.go:172] (0xc000a16140) (5) Data frame handling\nI0409 21:40:28.802152 1833 log.go:172] 
(0xc000a16140) (5) Data frame sent\nI0409 21:40:28.847108 1833 log.go:172] (0xc000a5a0b0) Data frame received for 7\nI0409 21:40:28.847157 1833 log.go:172] (0xc000a020a0) (7) Data frame handling\nI0409 21:40:28.847188 1833 log.go:172] (0xc000a5a0b0) Data frame received for 5\nI0409 21:40:28.847265 1833 log.go:172] (0xc000a16140) (5) Data frame handling\nI0409 21:40:28.847784 1833 log.go:172] (0xc000a5a0b0) Data frame received for 1\nI0409 21:40:28.848037 1833 log.go:172] (0xc000a5a0b0) (0xc000a02000) Stream removed, broadcasting: 3\nI0409 21:40:28.848122 1833 log.go:172] (0xc000a160a0) (1) Data frame handling\nI0409 21:40:28.848190 1833 log.go:172] (0xc000a160a0) (1) Data frame sent\nI0409 21:40:28.848271 1833 log.go:172] (0xc000a5a0b0) (0xc000a160a0) Stream removed, broadcasting: 1\nI0409 21:40:28.848305 1833 log.go:172] (0xc000a5a0b0) Go away received\nI0409 21:40:28.848655 1833 log.go:172] (0xc000a5a0b0) (0xc000a160a0) Stream removed, broadcasting: 1\nI0409 21:40:28.848683 1833 log.go:172] (0xc000a5a0b0) (0xc000a02000) Stream removed, broadcasting: 3\nI0409 21:40:28.848698 1833 log.go:172] (0xc000a5a0b0) (0xc000a16140) Stream removed, broadcasting: 5\nI0409 21:40:28.848717 1833 log.go:172] (0xc000a5a0b0) (0xc000a020a0) Stream removed, broadcasting: 7\n" Apr 9 21:40:28.871: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:40:30.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2617" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":130,"skipped":2108,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:40:30.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 9 21:40:30.939: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 9 21:40:35.955: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:40:35.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4272" for this suite. 
• [SLOW TEST:5.224 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":131,"skipped":2114,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:40:36.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9562 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9562;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9562 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9562;check="$$(dig +notcp +noall +answer 
+search dns-test-service.dns-9562.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9562.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9562.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9562.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9562.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9562.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9562.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 86.158.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.158.86_udp@PTR;check="$$(dig +tcp +noall +answer +search 86.158.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.158.86_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9562 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9562;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9562 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9562;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9562.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9562.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9562.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9562.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9562.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9562.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9562.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9562.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9562.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 86.158.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.158.86_udp@PTR;check="$$(dig +tcp +noall +answer +search 86.158.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.158.86_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 21:40:42.414: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.417: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.420: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.423: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.426: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods 
dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.430: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.433: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.436: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.643: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.647: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.650: INFO: Unable to read jessie_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.654: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.657: INFO: Unable to read jessie_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested 
resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.660: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.664: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.666: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:42.687: INFO: Lookups using dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9562 wheezy_tcp@dns-test-service.dns-9562 wheezy_udp@dns-test-service.dns-9562.svc wheezy_tcp@dns-test-service.dns-9562.svc wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9562 jessie_tcp@dns-test-service.dns-9562 jessie_udp@dns-test-service.dns-9562.svc jessie_tcp@dns-test-service.dns-9562.svc jessie_udp@_http._tcp.dns-test-service.dns-9562.svc jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc] Apr 9 21:40:47.692: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.695: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the 
requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.699: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.703: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.711: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.714: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.733: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.737: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could 
not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.740: INFO: Unable to read jessie_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.743: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.746: INFO: Unable to read jessie_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.749: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.752: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.755: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:47.773: INFO: Lookups using dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9562 wheezy_tcp@dns-test-service.dns-9562 wheezy_udp@dns-test-service.dns-9562.svc wheezy_tcp@dns-test-service.dns-9562.svc wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9562 jessie_tcp@dns-test-service.dns-9562 jessie_udp@dns-test-service.dns-9562.svc jessie_tcp@dns-test-service.dns-9562.svc jessie_udp@_http._tcp.dns-test-service.dns-9562.svc jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc] Apr 9 21:40:52.692: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.696: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.699: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.703: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.713: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc from pod 
dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.716: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.738: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.741: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.744: INFO: Unable to read jessie_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.747: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.750: INFO: Unable to read jessie_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.756: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.782: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.785: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:52.803: INFO: Lookups using dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9562 wheezy_tcp@dns-test-service.dns-9562 wheezy_udp@dns-test-service.dns-9562.svc wheezy_tcp@dns-test-service.dns-9562.svc wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9562 jessie_tcp@dns-test-service.dns-9562 jessie_udp@dns-test-service.dns-9562.svc jessie_tcp@dns-test-service.dns-9562.svc jessie_udp@_http._tcp.dns-test-service.dns-9562.svc jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc] Apr 9 21:40:57.692: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.697: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.704: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.713: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.716: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.735: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.738: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.741: INFO: Unable to read jessie_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.744: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.747: INFO: Unable to read jessie_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.750: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.753: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.757: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:40:57.774: INFO: Lookups using dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9562 wheezy_tcp@dns-test-service.dns-9562 wheezy_udp@dns-test-service.dns-9562.svc wheezy_tcp@dns-test-service.dns-9562.svc wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9562 jessie_tcp@dns-test-service.dns-9562 jessie_udp@dns-test-service.dns-9562.svc jessie_tcp@dns-test-service.dns-9562.svc jessie_udp@_http._tcp.dns-test-service.dns-9562.svc jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc] 
Apr 9 21:41:02.693: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.696: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.703: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.708: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.711: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.713: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods 
dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.734: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.736: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.738: INFO: Unable to read jessie_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.740: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.743: INFO: Unable to read jessie_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.745: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.747: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.749: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested 
resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:02.765: INFO: Lookups using dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9562 wheezy_tcp@dns-test-service.dns-9562 wheezy_udp@dns-test-service.dns-9562.svc wheezy_tcp@dns-test-service.dns-9562.svc wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9562 jessie_tcp@dns-test-service.dns-9562 jessie_udp@dns-test-service.dns-9562.svc jessie_tcp@dns-test-service.dns-9562.svc jessie_udp@_http._tcp.dns-test-service.dns-9562.svc jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc] Apr 9 21:41:07.693: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.696: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.703: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods 
dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.735: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.738: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.740: INFO: Unable to read jessie_udp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.743: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562 from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64) Apr 9 21:41:07.745: INFO: Unable to read jessie_udp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested 
resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64)
Apr 9 21:41:07.748: INFO: Unable to read jessie_tcp@dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64)
Apr 9 21:41:07.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64)
Apr 9 21:41:07.754: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc from pod dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64: the server could not find the requested resource (get pods dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64)
Apr 9 21:41:07.769: INFO: Lookups using dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9562 wheezy_tcp@dns-test-service.dns-9562 wheezy_udp@dns-test-service.dns-9562.svc wheezy_tcp@dns-test-service.dns-9562.svc wheezy_udp@_http._tcp.dns-test-service.dns-9562.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9562.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9562 jessie_tcp@dns-test-service.dns-9562 jessie_udp@dns-test-service.dns-9562.svc jessie_tcp@dns-test-service.dns-9562.svc jessie_udp@_http._tcp.dns-test-service.dns-9562.svc jessie_tcp@_http._tcp.dns-test-service.dns-9562.svc]
Apr 9 21:41:12.785: INFO: DNS probes using dns-9562/dns-test-0faa6bb0-8bbc-4ff6-b1e3-589008440f64 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:41:13.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9562" for this suite.
• [SLOW TEST:37.255 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":132,"skipped":2132,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:41:13.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 9 21:41:21.740: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 21:41:21.744: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 21:41:23.745: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 21:41:23.747: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 21:41:25.745: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 21:41:25.749: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 21:41:27.745: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 21:41:27.750: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 21:41:29.745: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 21:41:29.749: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:41:29.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3221" for this suite.
• [SLOW TEST:16.410 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2139,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:41:29.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 9 21:41:29.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1" in namespace "downward-api-552" to be "success or failure"
Apr 9 21:41:29.871: INFO: Pod "downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.790517ms
Apr 9 21:41:31.881: INFO: Pod "downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018925205s
Apr 9 21:41:33.885: INFO: Pod "downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02364545s
STEP: Saw pod success
Apr 9 21:41:33.885: INFO: Pod "downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1" satisfied condition "success or failure"
Apr 9 21:41:33.888: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1 container client-container:
STEP: delete the pod
Apr 9 21:41:33.911: INFO: Waiting for pod downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1 to disappear
Apr 9 21:41:33.917: INFO: Pod downwardapi-volume-915d78bf-d8f0-4bd8-8ddc-8f3cfc8519f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:41:33.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-552" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2160,"failed":0}
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:41:33.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Apr 9 21:41:34.039: INFO: Waiting up to 5m0s for pod "client-containers-520a242e-35b7-46b2-accc-a8696b1bee98" in namespace "containers-70" to be "success or failure"
Apr 9 21:41:34.042: INFO: Pod "client-containers-520a242e-35b7-46b2-accc-a8696b1bee98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.04771ms
Apr 9 21:41:36.062: INFO: Pod "client-containers-520a242e-35b7-46b2-accc-a8696b1bee98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022488117s
Apr 9 21:41:38.066: INFO: Pod "client-containers-520a242e-35b7-46b2-accc-a8696b1bee98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027092828s
STEP: Saw pod success
Apr 9 21:41:38.066: INFO: Pod "client-containers-520a242e-35b7-46b2-accc-a8696b1bee98" satisfied condition "success or failure"
Apr 9 21:41:38.069: INFO: Trying to get logs from node jerma-worker2 pod client-containers-520a242e-35b7-46b2-accc-a8696b1bee98 container test-container:
STEP: delete the pod
Apr 9 21:41:38.141: INFO: Waiting for pod client-containers-520a242e-35b7-46b2-accc-a8696b1bee98 to disappear
Apr 9 21:41:38.211: INFO: Pod client-containers-520a242e-35b7-46b2-accc-a8696b1bee98 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:41:38.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-70" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2161,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:41:38.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6346.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6346.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6346.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6346.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6346.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 35.244.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.244.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.244.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.244.35_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6346.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6346.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6346.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6346.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6346.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6346.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 35.244.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.244.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.244.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.244.35_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 21:41:44.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.680: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.683: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.686: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.749: INFO: Unable to read jessie_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.752: INFO: Unable to read jessie_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.755: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod 
dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.758: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:44.775: INFO: Lookups using dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46 failed for: [wheezy_udp@dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_udp@dns-test-service.dns-6346.svc.cluster.local jessie_tcp@dns-test-service.dns-6346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local] Apr 9 21:41:49.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.784: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.788: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.791: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod 
dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.813: INFO: Unable to read jessie_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.815: INFO: Unable to read jessie_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.818: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.821: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:49.842: INFO: Lookups using dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46 failed for: [wheezy_udp@dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_udp@dns-test-service.dns-6346.svc.cluster.local jessie_tcp@dns-test-service.dns-6346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local] Apr 9 21:41:54.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-6346.svc.cluster.local from pod 
dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.783: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.787: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.791: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.819: INFO: Unable to read jessie_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.824: INFO: Unable to read jessie_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.827: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.830: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the 
requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:54.848: INFO: Lookups using dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46 failed for: [wheezy_udp@dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_udp@dns-test-service.dns-6346.svc.cluster.local jessie_tcp@dns-test-service.dns-6346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local] Apr 9 21:41:59.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.784: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.788: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.791: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.814: INFO: Unable to read jessie_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods 
dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.817: INFO: Unable to read jessie_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.820: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.822: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:41:59.841: INFO: Lookups using dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46 failed for: [wheezy_udp@dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_udp@dns-test-service.dns-6346.svc.cluster.local jessie_tcp@dns-test-service.dns-6346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local] Apr 9 21:42:04.779: INFO: Unable to read wheezy_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:04.783: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) 
Apr 9 21:42:04.785: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:04.788: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:04.809: INFO: Unable to read jessie_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:04.812: INFO: Unable to read jessie_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:04.814: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:04.817: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:04.834: INFO: Lookups using dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46 failed for: [wheezy_udp@dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local 
jessie_udp@dns-test-service.dns-6346.svc.cluster.local jessie_tcp@dns-test-service.dns-6346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local] Apr 9 21:42:09.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.784: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.788: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.791: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.813: INFO: Unable to read jessie_udp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.815: INFO: Unable to read jessie_tcp@dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.818: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod 
dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.821: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local from pod dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46: the server could not find the requested resource (get pods dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46) Apr 9 21:42:09.838: INFO: Lookups using dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46 failed for: [wheezy_udp@dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@dns-test-service.dns-6346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_udp@dns-test-service.dns-6346.svc.cluster.local jessie_tcp@dns-test-service.dns-6346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6346.svc.cluster.local] Apr 9 21:42:14.859: INFO: DNS probes using dns-6346/dns-test-419d72f7-8ab5-4cba-82ae-5e192813dd46 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:42:15.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6346" for this suite. 
• [SLOW TEST:37.230 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":136,"skipped":2171,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:42:15.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 9 21:42:15.539: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 9 21:42:15.547: INFO: Number of nodes with available pods: 0
Apr 9 21:42:15.547: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 9 21:42:15.574: INFO: Number of nodes with available pods: 0 Apr 9 21:42:15.574: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:16.642: INFO: Number of nodes with available pods: 0 Apr 9 21:42:16.642: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:17.578: INFO: Number of nodes with available pods: 0 Apr 9 21:42:17.578: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:18.578: INFO: Number of nodes with available pods: 0 Apr 9 21:42:18.578: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:19.589: INFO: Number of nodes with available pods: 1 Apr 9 21:42:19.589: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 9 21:42:19.641: INFO: Number of nodes with available pods: 1 Apr 9 21:42:19.641: INFO: Number of running nodes: 0, number of available pods: 1 Apr 9 21:42:20.644: INFO: Number of nodes with available pods: 0 Apr 9 21:42:20.644: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 9 21:42:20.651: INFO: Number of nodes with available pods: 0 Apr 9 21:42:20.651: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:21.656: INFO: Number of nodes with available pods: 0 Apr 9 21:42:21.656: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:22.654: INFO: Number of nodes with available pods: 0 Apr 9 21:42:22.654: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:23.655: INFO: Number of nodes with available pods: 0 Apr 9 21:42:23.656: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:24.656: INFO: Number of nodes with available pods: 0 Apr 9 21:42:24.656: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:25.677: INFO: Number of nodes with available pods: 0 Apr 9 
21:42:25.677: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:26.656: INFO: Number of nodes with available pods: 0 Apr 9 21:42:26.656: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:27.656: INFO: Number of nodes with available pods: 0 Apr 9 21:42:27.656: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:28.655: INFO: Number of nodes with available pods: 0 Apr 9 21:42:28.655: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:29.655: INFO: Number of nodes with available pods: 0 Apr 9 21:42:29.655: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:30.655: INFO: Number of nodes with available pods: 0 Apr 9 21:42:30.656: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:31.656: INFO: Number of nodes with available pods: 0 Apr 9 21:42:31.656: INFO: Node jerma-worker2 is running more than one daemon pod Apr 9 21:42:32.656: INFO: Number of nodes with available pods: 1 Apr 9 21:42:32.656: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-236, will wait for the garbage collector to delete the pods Apr 9 21:42:32.720: INFO: Deleting DaemonSet.extensions daemon-set took: 6.294281ms Apr 9 21:42:33.021: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.241888ms Apr 9 21:42:36.124: INFO: Number of nodes with available pods: 0 Apr 9 21:42:36.124: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 21:42:36.127: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-236/daemonsets","resourceVersion":"6777384"},"items":null} Apr 9 21:42:36.129: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-236/pods","resourceVersion":"6777384"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:42:36.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-236" for this suite. • [SLOW TEST:20.720 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":137,"skipped":2181,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:42:36.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 9 21:42:36.256: INFO: >>> kubeConfig: /root/.kube/config Apr 9 21:42:39.162: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:42:48.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5640" for this suite. • [SLOW TEST:12.534 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":138,"skipped":2186,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:42:48.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4719 [It] Burst scaling should run to completion even with unhealthy pods [Slow] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-4719 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4719 Apr 9 21:42:48.830: INFO: Found 0 stateful pods, waiting for 1 Apr 9 21:42:58.834: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 9 21:42:58.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:43:01.476: INFO: stderr: "I0409 21:43:01.361630 1859 log.go:172] (0xc00002f3f0) (0xc00084c000) Create stream\nI0409 21:43:01.361668 1859 log.go:172] (0xc00002f3f0) (0xc00084c000) Stream added, broadcasting: 1\nI0409 21:43:01.363970 1859 log.go:172] (0xc00002f3f0) Reply frame received for 1\nI0409 21:43:01.364029 1859 log.go:172] (0xc00002f3f0) (0xc00084a000) Create stream\nI0409 21:43:01.364056 1859 log.go:172] (0xc00002f3f0) (0xc00084a000) Stream added, broadcasting: 3\nI0409 21:43:01.364923 1859 log.go:172] (0xc00002f3f0) Reply frame received for 3\nI0409 21:43:01.364951 1859 log.go:172] (0xc00002f3f0) (0xc00084a0a0) Create stream\nI0409 21:43:01.364959 1859 log.go:172] (0xc00002f3f0) (0xc00084a0a0) Stream added, broadcasting: 5\nI0409 21:43:01.365822 1859 log.go:172] (0xc00002f3f0) Reply frame received for 5\nI0409 21:43:01.440911 1859 log.go:172] (0xc00002f3f0) Data frame received for 5\nI0409 21:43:01.440946 1859 log.go:172] (0xc00084a0a0) (5) Data frame handling\nI0409 21:43:01.440966 1859 log.go:172] (0xc00084a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:43:01.469404 1859 log.go:172] (0xc00002f3f0) Data frame received for 3\nI0409 21:43:01.469435 1859 log.go:172] (0xc00084a000) (3) Data frame 
handling\nI0409 21:43:01.469449 1859 log.go:172] (0xc00084a000) (3) Data frame sent\nI0409 21:43:01.469526 1859 log.go:172] (0xc00002f3f0) Data frame received for 3\nI0409 21:43:01.469538 1859 log.go:172] (0xc00084a000) (3) Data frame handling\nI0409 21:43:01.470064 1859 log.go:172] (0xc00002f3f0) Data frame received for 5\nI0409 21:43:01.470091 1859 log.go:172] (0xc00084a0a0) (5) Data frame handling\nI0409 21:43:01.471538 1859 log.go:172] (0xc00002f3f0) Data frame received for 1\nI0409 21:43:01.471557 1859 log.go:172] (0xc00084c000) (1) Data frame handling\nI0409 21:43:01.471563 1859 log.go:172] (0xc00084c000) (1) Data frame sent\nI0409 21:43:01.471573 1859 log.go:172] (0xc00002f3f0) (0xc00084c000) Stream removed, broadcasting: 1\nI0409 21:43:01.471583 1859 log.go:172] (0xc00002f3f0) Go away received\nI0409 21:43:01.472052 1859 log.go:172] (0xc00002f3f0) (0xc00084c000) Stream removed, broadcasting: 1\nI0409 21:43:01.472080 1859 log.go:172] (0xc00002f3f0) (0xc00084a000) Stream removed, broadcasting: 3\nI0409 21:43:01.472094 1859 log.go:172] (0xc00002f3f0) (0xc00084a0a0) Stream removed, broadcasting: 5\n" Apr 9 21:43:01.477: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:43:01.477: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:43:01.480: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 9 21:43:11.485: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:43:11.485: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 21:43:11.518: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:11.518: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:01 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC }] Apr 9 21:43:11.518: INFO: Apr 9 21:43:11.518: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 9 21:43:12.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977388852s Apr 9 21:43:13.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972195261s Apr 9 21:43:14.532: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967728135s Apr 9 21:43:15.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962964789s Apr 9 21:43:16.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958820616s Apr 9 21:43:17.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953736918s Apr 9 21:43:18.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948180253s Apr 9 21:43:19.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943019006s Apr 9 21:43:20.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.679755ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4719 Apr 9 21:43:21.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:43:21.821: INFO: stderr: "I0409 21:43:21.718930 1893 log.go:172] (0xc0001171e0) (0xc0003d9c20) Create stream\nI0409 21:43:21.718997 1893 log.go:172] (0xc0001171e0) (0xc0003d9c20) Stream added, broadcasting: 1\nI0409 21:43:21.721436 1893 log.go:172] (0xc0001171e0) Reply frame received for 1\nI0409 21:43:21.721484 1893 log.go:172] (0xc0001171e0) (0xc0007ea000) Create stream\nI0409 21:43:21.721504 1893 log.go:172] (0xc0001171e0) (0xc0007ea000) Stream 
added, broadcasting: 3\nI0409 21:43:21.722541 1893 log.go:172] (0xc0001171e0) Reply frame received for 3\nI0409 21:43:21.722589 1893 log.go:172] (0xc0001171e0) (0xc0003d9cc0) Create stream\nI0409 21:43:21.722604 1893 log.go:172] (0xc0001171e0) (0xc0003d9cc0) Stream added, broadcasting: 5\nI0409 21:43:21.723557 1893 log.go:172] (0xc0001171e0) Reply frame received for 5\nI0409 21:43:21.813231 1893 log.go:172] (0xc0001171e0) Data frame received for 5\nI0409 21:43:21.813268 1893 log.go:172] (0xc0003d9cc0) (5) Data frame handling\nI0409 21:43:21.813282 1893 log.go:172] (0xc0003d9cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 21:43:21.813307 1893 log.go:172] (0xc0001171e0) Data frame received for 3\nI0409 21:43:21.813332 1893 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0409 21:43:21.813357 1893 log.go:172] (0xc0007ea000) (3) Data frame sent\nI0409 21:43:21.813373 1893 log.go:172] (0xc0001171e0) Data frame received for 3\nI0409 21:43:21.813396 1893 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0409 21:43:21.813563 1893 log.go:172] (0xc0001171e0) Data frame received for 5\nI0409 21:43:21.813575 1893 log.go:172] (0xc0003d9cc0) (5) Data frame handling\nI0409 21:43:21.815162 1893 log.go:172] (0xc0001171e0) Data frame received for 1\nI0409 21:43:21.815181 1893 log.go:172] (0xc0003d9c20) (1) Data frame handling\nI0409 21:43:21.815190 1893 log.go:172] (0xc0003d9c20) (1) Data frame sent\nI0409 21:43:21.815198 1893 log.go:172] (0xc0001171e0) (0xc0003d9c20) Stream removed, broadcasting: 1\nI0409 21:43:21.815206 1893 log.go:172] (0xc0001171e0) Go away received\nI0409 21:43:21.815694 1893 log.go:172] (0xc0001171e0) (0xc0003d9c20) Stream removed, broadcasting: 1\nI0409 21:43:21.815721 1893 log.go:172] (0xc0001171e0) (0xc0007ea000) Stream removed, broadcasting: 3\nI0409 21:43:21.815734 1893 log.go:172] (0xc0001171e0) (0xc0003d9cc0) Stream removed, broadcasting: 5\n" Apr 9 21:43:21.821: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:43:21.821: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:43:21.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:43:22.042: INFO: stderr: "I0409 21:43:21.956657 1916 log.go:172] (0xc0009680b0) (0xc0007294a0) Create stream\nI0409 21:43:21.956723 1916 log.go:172] (0xc0009680b0) (0xc0007294a0) Stream added, broadcasting: 1\nI0409 21:43:21.959302 1916 log.go:172] (0xc0009680b0) Reply frame received for 1\nI0409 21:43:21.959337 1916 log.go:172] (0xc0009680b0) (0xc000a7a000) Create stream\nI0409 21:43:21.959347 1916 log.go:172] (0xc0009680b0) (0xc000a7a000) Stream added, broadcasting: 3\nI0409 21:43:21.960214 1916 log.go:172] (0xc0009680b0) Reply frame received for 3\nI0409 21:43:21.960239 1916 log.go:172] (0xc0009680b0) (0xc000635a40) Create stream\nI0409 21:43:21.960249 1916 log.go:172] (0xc0009680b0) (0xc000635a40) Stream added, broadcasting: 5\nI0409 21:43:21.961304 1916 log.go:172] (0xc0009680b0) Reply frame received for 5\nI0409 21:43:22.037656 1916 log.go:172] (0xc0009680b0) Data frame received for 5\nI0409 21:43:22.037684 1916 log.go:172] (0xc000635a40) (5) Data frame handling\nI0409 21:43:22.037699 1916 log.go:172] (0xc000635a40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0409 21:43:22.037726 1916 log.go:172] (0xc0009680b0) Data frame received for 3\nI0409 21:43:22.037737 1916 log.go:172] (0xc000a7a000) (3) Data frame handling\nI0409 21:43:22.037749 1916 log.go:172] (0xc000a7a000) (3) Data frame sent\nI0409 21:43:22.037774 1916 log.go:172] (0xc0009680b0) Data frame received for 3\nI0409 21:43:22.037784 1916 log.go:172] (0xc000a7a000) (3) Data frame handling\nI0409 
21:43:22.037872 1916 log.go:172] (0xc0009680b0) Data frame received for 5\nI0409 21:43:22.037885 1916 log.go:172] (0xc000635a40) (5) Data frame handling\nI0409 21:43:22.039223 1916 log.go:172] (0xc0009680b0) Data frame received for 1\nI0409 21:43:22.039250 1916 log.go:172] (0xc0007294a0) (1) Data frame handling\nI0409 21:43:22.039284 1916 log.go:172] (0xc0007294a0) (1) Data frame sent\nI0409 21:43:22.039355 1916 log.go:172] (0xc0009680b0) (0xc0007294a0) Stream removed, broadcasting: 1\nI0409 21:43:22.039380 1916 log.go:172] (0xc0009680b0) Go away received\nI0409 21:43:22.039715 1916 log.go:172] (0xc0009680b0) (0xc0007294a0) Stream removed, broadcasting: 1\nI0409 21:43:22.039728 1916 log.go:172] (0xc0009680b0) (0xc000a7a000) Stream removed, broadcasting: 3\nI0409 21:43:22.039735 1916 log.go:172] (0xc0009680b0) (0xc000635a40) Stream removed, broadcasting: 5\n" Apr 9 21:43:22.043: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:43:22.043: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:43:22.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:43:22.271: INFO: stderr: "I0409 21:43:22.180115 1936 log.go:172] (0xc0003d9c30) (0xc000a665a0) Create stream\nI0409 21:43:22.180184 1936 log.go:172] (0xc0003d9c30) (0xc000a665a0) Stream added, broadcasting: 1\nI0409 21:43:22.185044 1936 log.go:172] (0xc0003d9c30) Reply frame received for 1\nI0409 21:43:22.185082 1936 log.go:172] (0xc0003d9c30) (0xc000698640) Create stream\nI0409 21:43:22.185099 1936 log.go:172] (0xc0003d9c30) (0xc000698640) Stream added, broadcasting: 3\nI0409 21:43:22.186278 1936 log.go:172] (0xc0003d9c30) Reply frame received for 3\nI0409 21:43:22.186318 1936 log.go:172] (0xc0003d9c30) (0xc0004c3400) Create stream\nI0409 
21:43:22.186329 1936 log.go:172] (0xc0003d9c30) (0xc0004c3400) Stream added, broadcasting: 5\nI0409 21:43:22.187406 1936 log.go:172] (0xc0003d9c30) Reply frame received for 5\nI0409 21:43:22.265624 1936 log.go:172] (0xc0003d9c30) Data frame received for 5\nI0409 21:43:22.265651 1936 log.go:172] (0xc0004c3400) (5) Data frame handling\nI0409 21:43:22.265659 1936 log.go:172] (0xc0004c3400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0409 21:43:22.265674 1936 log.go:172] (0xc0003d9c30) Data frame received for 3\nI0409 21:43:22.265682 1936 log.go:172] (0xc000698640) (3) Data frame handling\nI0409 21:43:22.265691 1936 log.go:172] (0xc000698640) (3) Data frame sent\nI0409 21:43:22.265696 1936 log.go:172] (0xc0003d9c30) Data frame received for 3\nI0409 21:43:22.265700 1936 log.go:172] (0xc000698640) (3) Data frame handling\nI0409 21:43:22.265717 1936 log.go:172] (0xc0003d9c30) Data frame received for 5\nI0409 21:43:22.265731 1936 log.go:172] (0xc0004c3400) (5) Data frame handling\nI0409 21:43:22.267231 1936 log.go:172] (0xc0003d9c30) Data frame received for 1\nI0409 21:43:22.267246 1936 log.go:172] (0xc000a665a0) (1) Data frame handling\nI0409 21:43:22.267252 1936 log.go:172] (0xc000a665a0) (1) Data frame sent\nI0409 21:43:22.267280 1936 log.go:172] (0xc0003d9c30) (0xc000a665a0) Stream removed, broadcasting: 1\nI0409 21:43:22.267340 1936 log.go:172] (0xc0003d9c30) Go away received\nI0409 21:43:22.267504 1936 log.go:172] (0xc0003d9c30) (0xc000a665a0) Stream removed, broadcasting: 1\nI0409 21:43:22.267516 1936 log.go:172] (0xc0003d9c30) (0xc000698640) Stream removed, broadcasting: 3\nI0409 21:43:22.267522 1936 log.go:172] (0xc0003d9c30) (0xc0004c3400) Stream removed, broadcasting: 5\n" Apr 9 21:43:22.271: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:43:22.271: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on 
ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:43:22.275: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 9 21:43:32.280: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 21:43:32.280: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 9 21:43:32.280: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 9 21:43:32.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:43:32.514: INFO: stderr: "I0409 21:43:32.426462 1958 log.go:172] (0xc000aea160) (0xc000ad00a0) Create stream\nI0409 21:43:32.426528 1958 log.go:172] (0xc000aea160) (0xc000ad00a0) Stream added, broadcasting: 1\nI0409 21:43:32.429581 1958 log.go:172] (0xc000aea160) Reply frame received for 1\nI0409 21:43:32.429624 1958 log.go:172] (0xc000aea160) (0xc000651d60) Create stream\nI0409 21:43:32.429639 1958 log.go:172] (0xc000aea160) (0xc000651d60) Stream added, broadcasting: 3\nI0409 21:43:32.430495 1958 log.go:172] (0xc000aea160) Reply frame received for 3\nI0409 21:43:32.430529 1958 log.go:172] (0xc000aea160) (0xc000ad0140) Create stream\nI0409 21:43:32.430545 1958 log.go:172] (0xc000aea160) (0xc000ad0140) Stream added, broadcasting: 5\nI0409 21:43:32.431301 1958 log.go:172] (0xc000aea160) Reply frame received for 5\nI0409 21:43:32.507176 1958 log.go:172] (0xc000aea160) Data frame received for 5\nI0409 21:43:32.507209 1958 log.go:172] (0xc000ad0140) (5) Data frame handling\nI0409 21:43:32.507218 1958 log.go:172] (0xc000ad0140) (5) Data frame sent\nI0409 21:43:32.507225 1958 log.go:172] (0xc000aea160) Data frame received for 5\nI0409 21:43:32.507231 1958 log.go:172] (0xc000ad0140) (5) Data frame handling\n+ mv 
-v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:43:32.507249 1958 log.go:172] (0xc000aea160) Data frame received for 3\nI0409 21:43:32.507256 1958 log.go:172] (0xc000651d60) (3) Data frame handling\nI0409 21:43:32.507262 1958 log.go:172] (0xc000651d60) (3) Data frame sent\nI0409 21:43:32.507268 1958 log.go:172] (0xc000aea160) Data frame received for 3\nI0409 21:43:32.507274 1958 log.go:172] (0xc000651d60) (3) Data frame handling\nI0409 21:43:32.509435 1958 log.go:172] (0xc000aea160) Data frame received for 1\nI0409 21:43:32.509469 1958 log.go:172] (0xc000ad00a0) (1) Data frame handling\nI0409 21:43:32.509493 1958 log.go:172] (0xc000ad00a0) (1) Data frame sent\nI0409 21:43:32.509512 1958 log.go:172] (0xc000aea160) (0xc000ad00a0) Stream removed, broadcasting: 1\nI0409 21:43:32.509546 1958 log.go:172] (0xc000aea160) Go away received\nI0409 21:43:32.509890 1958 log.go:172] (0xc000aea160) (0xc000ad00a0) Stream removed, broadcasting: 1\nI0409 21:43:32.509913 1958 log.go:172] (0xc000aea160) (0xc000651d60) Stream removed, broadcasting: 3\nI0409 21:43:32.509925 1958 log.go:172] (0xc000aea160) (0xc000ad0140) Stream removed, broadcasting: 5\n" Apr 9 21:43:32.514: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:43:32.514: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:43:32.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:43:32.763: INFO: stderr: "I0409 21:43:32.640867 1978 log.go:172] (0xc0009d4000) (0xc0006546e0) Create stream\nI0409 21:43:32.640912 1978 log.go:172] (0xc0009d4000) (0xc0006546e0) Stream added, broadcasting: 1\nI0409 21:43:32.643116 1978 log.go:172] (0xc0009d4000) Reply frame received for 1\nI0409 21:43:32.643163 1978 log.go:172] (0xc0009d4000) (0xc0002a74a0) Create 
stream\nI0409 21:43:32.643178 1978 log.go:172] (0xc0009d4000) (0xc0002a74a0) Stream added, broadcasting: 3\nI0409 21:43:32.644262 1978 log.go:172] (0xc0009d4000) Reply frame received for 3\nI0409 21:43:32.644290 1978 log.go:172] (0xc0009d4000) (0xc00067dae0) Create stream\nI0409 21:43:32.644301 1978 log.go:172] (0xc0009d4000) (0xc00067dae0) Stream added, broadcasting: 5\nI0409 21:43:32.645032 1978 log.go:172] (0xc0009d4000) Reply frame received for 5\nI0409 21:43:32.717518 1978 log.go:172] (0xc0009d4000) Data frame received for 5\nI0409 21:43:32.717556 1978 log.go:172] (0xc00067dae0) (5) Data frame handling\nI0409 21:43:32.717588 1978 log.go:172] (0xc00067dae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:43:32.756458 1978 log.go:172] (0xc0009d4000) Data frame received for 3\nI0409 21:43:32.756486 1978 log.go:172] (0xc0002a74a0) (3) Data frame handling\nI0409 21:43:32.756508 1978 log.go:172] (0xc0002a74a0) (3) Data frame sent\nI0409 21:43:32.756515 1978 log.go:172] (0xc0009d4000) Data frame received for 3\nI0409 21:43:32.756522 1978 log.go:172] (0xc0002a74a0) (3) Data frame handling\nI0409 21:43:32.756850 1978 log.go:172] (0xc0009d4000) Data frame received for 5\nI0409 21:43:32.756879 1978 log.go:172] (0xc00067dae0) (5) Data frame handling\nI0409 21:43:32.758967 1978 log.go:172] (0xc0009d4000) Data frame received for 1\nI0409 21:43:32.758984 1978 log.go:172] (0xc0006546e0) (1) Data frame handling\nI0409 21:43:32.758997 1978 log.go:172] (0xc0006546e0) (1) Data frame sent\nI0409 21:43:32.759007 1978 log.go:172] (0xc0009d4000) (0xc0006546e0) Stream removed, broadcasting: 1\nI0409 21:43:32.759137 1978 log.go:172] (0xc0009d4000) Go away received\nI0409 21:43:32.759322 1978 log.go:172] (0xc0009d4000) (0xc0006546e0) Stream removed, broadcasting: 1\nI0409 21:43:32.759339 1978 log.go:172] (0xc0009d4000) (0xc0002a74a0) Stream removed, broadcasting: 3\nI0409 21:43:32.759346 1978 log.go:172] (0xc0009d4000) (0xc00067dae0) Stream removed, 
broadcasting: 5\n" Apr 9 21:43:32.763: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:43:32.763: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:43:32.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:43:33.011: INFO: stderr: "I0409 21:43:32.902504 1999 log.go:172] (0xc0009c0420) (0xc000a5e500) Create stream\nI0409 21:43:32.902553 1999 log.go:172] (0xc0009c0420) (0xc000a5e500) Stream added, broadcasting: 1\nI0409 21:43:32.908421 1999 log.go:172] (0xc0009c0420) Reply frame received for 1\nI0409 21:43:32.908461 1999 log.go:172] (0xc0009c0420) (0xc0005ae6e0) Create stream\nI0409 21:43:32.908472 1999 log.go:172] (0xc0009c0420) (0xc0005ae6e0) Stream added, broadcasting: 3\nI0409 21:43:32.909571 1999 log.go:172] (0xc0009c0420) Reply frame received for 3\nI0409 21:43:32.909616 1999 log.go:172] (0xc0009c0420) (0xc000528000) Create stream\nI0409 21:43:32.909632 1999 log.go:172] (0xc0009c0420) (0xc000528000) Stream added, broadcasting: 5\nI0409 21:43:32.910566 1999 log.go:172] (0xc0009c0420) Reply frame received for 5\nI0409 21:43:32.978001 1999 log.go:172] (0xc0009c0420) Data frame received for 5\nI0409 21:43:32.978028 1999 log.go:172] (0xc000528000) (5) Data frame handling\nI0409 21:43:32.978053 1999 log.go:172] (0xc000528000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:43:33.002417 1999 log.go:172] (0xc0009c0420) Data frame received for 3\nI0409 21:43:33.002466 1999 log.go:172] (0xc0005ae6e0) (3) Data frame handling\nI0409 21:43:33.002495 1999 log.go:172] (0xc0005ae6e0) (3) Data frame sent\nI0409 21:43:33.002517 1999 log.go:172] (0xc0009c0420) Data frame received for 3\nI0409 21:43:33.002553 1999 log.go:172] (0xc0009c0420) Data frame received for 
5\nI0409 21:43:33.002578 1999 log.go:172] (0xc000528000) (5) Data frame handling\nI0409 21:43:33.002601 1999 log.go:172] (0xc0005ae6e0) (3) Data frame handling\nI0409 21:43:33.006051 1999 log.go:172] (0xc0009c0420) Data frame received for 1\nI0409 21:43:33.006169 1999 log.go:172] (0xc000a5e500) (1) Data frame handling\nI0409 21:43:33.006271 1999 log.go:172] (0xc000a5e500) (1) Data frame sent\nI0409 21:43:33.006303 1999 log.go:172] (0xc0009c0420) (0xc000a5e500) Stream removed, broadcasting: 1\nI0409 21:43:33.006340 1999 log.go:172] (0xc0009c0420) Go away received\nI0409 21:43:33.006772 1999 log.go:172] (0xc0009c0420) (0xc000a5e500) Stream removed, broadcasting: 1\nI0409 21:43:33.006812 1999 log.go:172] (0xc0009c0420) (0xc0005ae6e0) Stream removed, broadcasting: 3\nI0409 21:43:33.006838 1999 log.go:172] (0xc0009c0420) (0xc000528000) Stream removed, broadcasting: 5\n" Apr 9 21:43:33.011: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:43:33.011: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:43:33.011: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 21:43:33.019: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 9 21:43:43.027: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:43:43.027: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:43:43.027: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 9 21:43:43.041: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:43.041: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC }] Apr 9 21:43:43.041: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:43.041: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:43.041: INFO: Apr 9 21:43:43.041: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 9 21:43:44.046: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:44.046: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC }] Apr 9 21:43:44.047: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC 
} {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:44.047: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:44.047: INFO: Apr 9 21:43:44.047: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 9 21:43:45.050: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:45.050: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC }] Apr 9 21:43:45.050: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 
UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:45.050: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:45.050: INFO: Apr 9 21:43:45.050: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 9 21:43:46.055: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:46.055: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:42:48 +0000 UTC }] Apr 9 21:43:46.055: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:46.055: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:46.055: INFO: Apr 9 21:43:46.055: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 9 21:43:47.060: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:47.060: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:47.060: INFO: Apr 9 21:43:47.060: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 9 21:43:48.065: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:48.065: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:48.065: INFO: Apr 9 21:43:48.065: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 9 21:43:49.069: INFO: POD NODE PHASE GRACE CONDITIONS Apr 9 21:43:49.069: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-04-09 21:43:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-09 21:43:11 +0000 UTC }] Apr 9 21:43:49.069: INFO: Apr 9 21:43:49.069: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 9 21:43:50.073: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.965085642s Apr 9 21:43:51.079: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.961024762s Apr 9 21:43:52.088: INFO: Verifying statefulset ss doesn't scale past 0 for another 954.923798ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4719 Apr 9 21:43:53.092: INFO: Scaling statefulset ss to 0 Apr 9 21:43:53.102: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 9 21:43:53.105: INFO: Deleting all statefulset in ns statefulset-4719 Apr 9 21:43:53.108: INFO: Scaling statefulset ss to 0 Apr 9 21:43:53.115: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 21:43:53.117: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:43:53.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4719" for this suite. 
• [SLOW TEST:64.452 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":139,"skipped":2192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:43:53.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c3ca6f2c-50ae-4209-82c0-a7ce41bee565 STEP: Creating secret with name s-test-opt-upd-01912f38-5d40-4d67-af89-2d0e4eb99b75 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c3ca6f2c-50ae-4209-82c0-a7ce41bee565 STEP: Updating secret s-test-opt-upd-01912f38-5d40-4d67-af89-2d0e4eb99b75 STEP: Creating secret with name s-test-opt-create-0f913142-8992-4280-9e65-d3527c59ef0a STEP: waiting to 
observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:45:11.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5944" for this suite. • [SLOW TEST:78.611 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2217,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:45:11.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 9 21:45:16.332: INFO: Successfully updated pod "adopt-release-lbrf2" STEP: Checking that the Job readopts the Pod Apr 9 21:45:16.332: INFO: Waiting up to 15m0s for pod "adopt-release-lbrf2" in namespace "job-6239" to be "adopted" Apr 9 21:45:16.342: INFO: Pod "adopt-release-lbrf2": Phase="Running", Reason="", 
readiness=true. Elapsed: 9.581266ms Apr 9 21:45:18.440: INFO: Pod "adopt-release-lbrf2": Phase="Running", Reason="", readiness=true. Elapsed: 2.107602238s Apr 9 21:45:18.440: INFO: Pod "adopt-release-lbrf2" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 9 21:45:18.948: INFO: Successfully updated pod "adopt-release-lbrf2" STEP: Checking that the Job releases the Pod Apr 9 21:45:18.948: INFO: Waiting up to 15m0s for pod "adopt-release-lbrf2" in namespace "job-6239" to be "released" Apr 9 21:45:18.953: INFO: Pod "adopt-release-lbrf2": Phase="Running", Reason="", readiness=true. Elapsed: 4.358584ms Apr 9 21:45:20.957: INFO: Pod "adopt-release-lbrf2": Phase="Running", Reason="", readiness=true. Elapsed: 2.008668218s Apr 9 21:45:20.957: INFO: Pod "adopt-release-lbrf2" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:45:20.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6239" for this suite. 
• [SLOW TEST:9.198 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":141,"skipped":2218,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:45:20.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-08fe1f5e-19c0-4a67-8760-8ff52639dcaa
STEP: Creating a pod to test consume configMaps
Apr 9 21:45:21.200: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9" in namespace "configmap-7925" to be "success or failure"
Apr 9 21:45:21.202: INFO: Pod "pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363075ms
Apr 9 21:45:23.211: INFO: Pod "pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011342292s
Apr 9 21:45:25.216: INFO: Pod "pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015835058s
STEP: Saw pod success
Apr 9 21:45:25.216: INFO: Pod "pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9" satisfied condition "success or failure"
Apr 9 21:45:25.219: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9 container configmap-volume-test:
STEP: delete the pod
Apr 9 21:45:25.276: INFO: Waiting for pod pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9 to disappear
Apr 9 21:45:25.282: INFO: Pod pod-configmaps-b6340b8a-5699-462f-a2a6-777c6001acd9 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:45:25.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7925" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2229,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:45:25.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 9 21:45:25.402: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:25.432: INFO: Number of nodes with available pods: 0
Apr 9 21:45:25.432: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:26.513: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:26.522: INFO: Number of nodes with available pods: 0
Apr 9 21:45:26.522: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:27.437: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:27.440: INFO: Number of nodes with available pods: 0
Apr 9 21:45:27.440: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:28.437: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:28.440: INFO: Number of nodes with available pods: 1
Apr 9 21:45:28.440: INFO: Node jerma-worker2 is running more than one daemon pod
Apr 9 21:45:29.436: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:29.438: INFO: Number of nodes with available pods: 2
Apr 9 21:45:29.438: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 9 21:45:29.450: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:29.452: INFO: Number of nodes with available pods: 1
Apr 9 21:45:29.452: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:30.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:30.618: INFO: Number of nodes with available pods: 1
Apr 9 21:45:30.618: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:31.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:31.459: INFO: Number of nodes with available pods: 1
Apr 9 21:45:31.459: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:32.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:32.461: INFO: Number of nodes with available pods: 1
Apr 9 21:45:32.461: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:33.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:33.462: INFO: Number of nodes with available pods: 1
Apr 9 21:45:33.462: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:34.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:34.460: INFO: Number of nodes with available pods: 1
Apr 9 21:45:34.460: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:35.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:35.461: INFO: Number of nodes with available pods: 1
Apr 9 21:45:35.461: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:36.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:36.460: INFO: Number of nodes with available pods: 1
Apr 9 21:45:36.460: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:37.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:37.462: INFO: Number of nodes with available pods: 1
Apr 9 21:45:37.462: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:38.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:38.482: INFO: Number of nodes with available pods: 1
Apr 9 21:45:38.482: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:39.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:39.462: INFO: Number of nodes with available pods: 1
Apr 9 21:45:39.462: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:40.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:40.461: INFO: Number of nodes with available pods: 1
Apr 9 21:45:40.461: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:41.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:41.495: INFO: Number of nodes with available pods: 1
Apr 9 21:45:41.495: INFO: Node jerma-worker is running more than one daemon pod
Apr 9 21:45:42.459: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 9 21:45:42.462: INFO: Number of nodes with available pods: 2
Apr 9 21:45:42.462: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3470, will wait for the garbage collector to delete the pods
Apr 9 21:45:42.525: INFO: Deleting DaemonSet.extensions daemon-set took: 6.545277ms
Apr 9 21:45:42.826: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.257477ms
Apr 9 21:45:49.528: INFO: Number of nodes with available pods: 0
Apr 9 21:45:49.528: INFO: Number of running nodes: 0, number of available pods: 0
Apr 9 21:45:49.531: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3470/daemonsets","resourceVersion":"6778339"},"items":null}
Apr 9 21:45:49.533: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3470/pods","resourceVersion":"6778339"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:45:49.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3470" for this suite.
• [SLOW TEST:24.257 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":143,"skipped":2243,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:45:49.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-kshz
STEP: Creating a pod to test atomic-volume-subpath
Apr 9 21:45:49.656: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kshz" in namespace "subpath-1549" to be "success or failure"
Apr 9 21:45:49.676: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.336578ms
Apr 9 21:45:51.680: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023474521s
Apr 9 21:45:53.684: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 4.027686728s
Apr 9 21:45:55.688: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 6.032159733s
Apr 9 21:45:57.693: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 8.036465724s
Apr 9 21:45:59.696: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 10.040143669s
Apr 9 21:46:01.701: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 12.04443167s
Apr 9 21:46:03.704: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 14.047991499s
Apr 9 21:46:05.709: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 16.052189024s
Apr 9 21:46:07.713: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 18.056210287s
Apr 9 21:46:09.716: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 20.059642454s
Apr 9 21:46:11.720: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Running", Reason="", readiness=true. Elapsed: 22.063706866s
Apr 9 21:46:13.724: INFO: Pod "pod-subpath-test-configmap-kshz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068045671s
STEP: Saw pod success
Apr 9 21:46:13.724: INFO: Pod "pod-subpath-test-configmap-kshz" satisfied condition "success or failure"
Apr 9 21:46:13.728: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-kshz container test-container-subpath-configmap-kshz:
STEP: delete the pod
Apr 9 21:46:13.789: INFO: Waiting for pod pod-subpath-test-configmap-kshz to disappear
Apr 9 21:46:13.798: INFO: Pod pod-subpath-test-configmap-kshz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kshz
Apr 9 21:46:13.798: INFO: Deleting pod "pod-subpath-test-configmap-kshz" in namespace "subpath-1549"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:46:13.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1549" for this suite.
• [SLOW TEST:24.255 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":144,"skipped":2246,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:46:13.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Apr 9 21:46:13.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1557'
Apr 9 21:46:14.156: INFO: stderr: ""
Apr 9 21:46:14.156: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 9 21:46:14.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1557'
Apr 9 21:46:14.255: INFO: stderr: ""
Apr 9 21:46:14.255: INFO: stdout: "update-demo-nautilus-cgfnr update-demo-nautilus-mgr75 "
Apr 9 21:46:14.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:14.349: INFO: stderr: ""
Apr 9 21:46:14.349: INFO: stdout: ""
Apr 9 21:46:14.349: INFO: update-demo-nautilus-cgfnr is created but not running
Apr 9 21:46:19.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1557'
Apr 9 21:46:19.450: INFO: stderr: ""
Apr 9 21:46:19.450: INFO: stdout: "update-demo-nautilus-cgfnr update-demo-nautilus-mgr75 "
Apr 9 21:46:19.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:19.550: INFO: stderr: ""
Apr 9 21:46:19.550: INFO: stdout: "true"
Apr 9 21:46:19.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:19.650: INFO: stderr: ""
Apr 9 21:46:19.650: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 21:46:19.650: INFO: validating pod update-demo-nautilus-cgfnr
Apr 9 21:46:19.655: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 21:46:19.655: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 21:46:19.655: INFO: update-demo-nautilus-cgfnr is verified up and running
Apr 9 21:46:19.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mgr75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:19.748: INFO: stderr: ""
Apr 9 21:46:19.748: INFO: stdout: "true"
Apr 9 21:46:19.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mgr75 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:19.841: INFO: stderr: ""
Apr 9 21:46:19.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 21:46:19.841: INFO: validating pod update-demo-nautilus-mgr75
Apr 9 21:46:19.845: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 21:46:19.845: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 21:46:19.845: INFO: update-demo-nautilus-mgr75 is verified up and running
STEP: scaling down the replication controller
Apr 9 21:46:19.848: INFO: scanned /root for discovery docs:
Apr 9 21:46:19.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1557'
Apr 9 21:46:20.979: INFO: stderr: ""
Apr 9 21:46:20.979: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 9 21:46:20.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1557'
Apr 9 21:46:21.071: INFO: stderr: ""
Apr 9 21:46:21.071: INFO: stdout: "update-demo-nautilus-cgfnr update-demo-nautilus-mgr75 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 9 21:46:26.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1557'
Apr 9 21:46:26.168: INFO: stderr: ""
Apr 9 21:46:26.168: INFO: stdout: "update-demo-nautilus-cgfnr update-demo-nautilus-mgr75 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 9 21:46:31.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1557'
Apr 9 21:46:31.283: INFO: stderr: ""
Apr 9 21:46:31.283: INFO: stdout: "update-demo-nautilus-cgfnr "
Apr 9 21:46:31.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:31.384: INFO: stderr: ""
Apr 9 21:46:31.384: INFO: stdout: "true"
Apr 9 21:46:31.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:31.489: INFO: stderr: ""
Apr 9 21:46:31.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 21:46:31.489: INFO: validating pod update-demo-nautilus-cgfnr
Apr 9 21:46:31.492: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 21:46:31.492: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 21:46:31.492: INFO: update-demo-nautilus-cgfnr is verified up and running
STEP: scaling up the replication controller
Apr 9 21:46:31.494: INFO: scanned /root for discovery docs:
Apr 9 21:46:31.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1557'
Apr 9 21:46:32.663: INFO: stderr: ""
Apr 9 21:46:32.663: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 9 21:46:32.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1557'
Apr 9 21:46:32.785: INFO: stderr: ""
Apr 9 21:46:32.785: INFO: stdout: "update-demo-nautilus-cgfnr update-demo-nautilus-vqcgt "
Apr 9 21:46:32.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:32.957: INFO: stderr: ""
Apr 9 21:46:32.957: INFO: stdout: "true"
Apr 9 21:46:32.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:33.051: INFO: stderr: ""
Apr 9 21:46:33.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 21:46:33.051: INFO: validating pod update-demo-nautilus-cgfnr
Apr 9 21:46:33.054: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 21:46:33.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 21:46:33.054: INFO: update-demo-nautilus-cgfnr is verified up and running
Apr 9 21:46:33.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vqcgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:33.151: INFO: stderr: ""
Apr 9 21:46:33.151: INFO: stdout: ""
Apr 9 21:46:33.151: INFO: update-demo-nautilus-vqcgt is created but not running
Apr 9 21:46:38.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1557'
Apr 9 21:46:38.247: INFO: stderr: ""
Apr 9 21:46:38.247: INFO: stdout: "update-demo-nautilus-cgfnr update-demo-nautilus-vqcgt "
Apr 9 21:46:38.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:38.341: INFO: stderr: ""
Apr 9 21:46:38.341: INFO: stdout: "true"
Apr 9 21:46:38.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgfnr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:38.448: INFO: stderr: ""
Apr 9 21:46:38.448: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 21:46:38.448: INFO: validating pod update-demo-nautilus-cgfnr
Apr 9 21:46:38.452: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 21:46:38.452: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 21:46:38.452: INFO: update-demo-nautilus-cgfnr is verified up and running
Apr 9 21:46:38.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vqcgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:38.556: INFO: stderr: ""
Apr 9 21:46:38.556: INFO: stdout: "true"
Apr 9 21:46:38.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vqcgt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1557'
Apr 9 21:46:38.662: INFO: stderr: ""
Apr 9 21:46:38.662: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 21:46:38.662: INFO: validating pod update-demo-nautilus-vqcgt
Apr 9 21:46:38.666: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 21:46:38.666: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 21:46:38.666: INFO: update-demo-nautilus-vqcgt is verified up and running STEP: using delete to clean up resources Apr 9 21:46:38.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1557' Apr 9 21:46:38.798: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:46:38.798: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 9 21:46:38.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1557' Apr 9 21:46:38.891: INFO: stderr: "No resources found in kubectl-1557 namespace.\n" Apr 9 21:46:38.891: INFO: stdout: "" Apr 9 21:46:38.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1557 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 21:46:38.988: INFO: stderr: "" Apr 9 21:46:38.988: INFO: stdout: "update-demo-nautilus-cgfnr\nupdate-demo-nautilus-vqcgt\n" Apr 9 21:46:39.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1557' Apr 9 21:46:39.590: INFO: stderr: "No resources found in kubectl-1557 namespace.\n" Apr 9 21:46:39.591: INFO: stdout: "" Apr 9 21:46:39.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1557 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 21:46:39.724: INFO: stderr: "" Apr 9 21:46:39.724: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:46:39.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1557" for this suite. • [SLOW TEST:25.926 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":145,"skipped":2251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:46:39.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 9 21:46:40.006: INFO: Waiting up to 5m0s for pod "pod-72185c4b-421d-4840-aafd-ed53748e56c6" in namespace "emptydir-5877" to be "success or failure" Apr 9 21:46:40.008: INFO: Pod "pod-72185c4b-421d-4840-aafd-ed53748e56c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.738868ms
Apr 9 21:46:42.013: INFO: Pod "pod-72185c4b-421d-4840-aafd-ed53748e56c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006931474s
Apr 9 21:46:44.017: INFO: Pod "pod-72185c4b-421d-4840-aafd-ed53748e56c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011432172s
STEP: Saw pod success
Apr 9 21:46:44.017: INFO: Pod "pod-72185c4b-421d-4840-aafd-ed53748e56c6" satisfied condition "success or failure"
Apr 9 21:46:44.020: INFO: Trying to get logs from node jerma-worker2 pod pod-72185c4b-421d-4840-aafd-ed53748e56c6 container test-container:
STEP: delete the pod
Apr 9 21:46:44.065: INFO: Waiting for pod pod-72185c4b-421d-4840-aafd-ed53748e56c6 to disappear
Apr 9 21:46:44.117: INFO: Pod pod-72185c4b-421d-4840-aafd-ed53748e56c6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:46:44.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5877" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2298,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:46:44.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:46:48.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1735" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2310,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:46:48.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Apr 9 21:46:52.360: INFO: Pod pod-hostip-5375e1af-9add-4185-9774-0934f2c9673e has hostIP: 172.17.0.10
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:46:52.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9876" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:46:52.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-b6d91d5e-7a1b-45c3-9a25-3c05205b8fec in namespace container-probe-9665
Apr 9 21:46:56.463: INFO: Started pod liveness-b6d91d5e-7a1b-45c3-9a25-3c05205b8fec in namespace container-probe-9665
STEP: checking the pod's current state and verifying that restartCount is present
Apr 9 21:46:56.467: INFO: Initial restart count of pod liveness-b6d91d5e-7a1b-45c3-9a25-3c05205b8fec is 0
Apr 9 21:47:20.519: INFO: Restart count of pod container-probe-9665/liveness-b6d91d5e-7a1b-45c3-9a25-3c05205b8fec is now 1 (24.052338754s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:47:20.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9665" for this suite.
• [SLOW TEST:28.174 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2365,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:47:20.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-7440a8ff-2e9c-429b-93f4-230315e349c0
STEP: Creating a pod to test consume secrets
Apr 9 21:47:21.042: INFO: Waiting up to 5m0s for pod "pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e" in namespace "secrets-6218" to be "success or failure"
Apr 9 21:47:21.051: INFO: Pod "pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e": Phase="Pending", Reason="", readiness=false.
Elapsed: 8.977781ms
Apr 9 21:47:23.056: INFO: Pod "pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01343049s
Apr 9 21:47:25.060: INFO: Pod "pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017076658s
STEP: Saw pod success
Apr 9 21:47:25.060: INFO: Pod "pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e" satisfied condition "success or failure"
Apr 9 21:47:25.062: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e container secret-volume-test:
STEP: delete the pod
Apr 9 21:47:25.156: INFO: Waiting for pod pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e to disappear
Apr 9 21:47:25.177: INFO: Pod pod-secrets-7d897d60-b99e-4988-b3f6-eee761b1396e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:47:25.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6218" for this suite.
STEP: Destroying namespace "secret-namespace-6960" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2377,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:47:25.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Apr 9 21:47:25.785: INFO: created pod pod-service-account-defaultsa
Apr 9 21:47:25.785: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 9 21:47:25.794: INFO: created pod pod-service-account-mountsa
Apr 9 21:47:25.794: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 9 21:47:25.823: INFO: created pod pod-service-account-nomountsa
Apr 9 21:47:25.823: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 9 21:47:25.836: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 9 21:47:25.836: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 9 21:47:25.859: INFO: created pod pod-service-account-mountsa-mountspec
Apr 9 21:47:25.859: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 9 21:47:25.903: INFO: created pod
pod-service-account-nomountsa-mountspec
Apr 9 21:47:25.903: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 9 21:47:25.911: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 9 21:47:25.911: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 9 21:47:25.964: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 9 21:47:25.964: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 9 21:47:25.996: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 9 21:47:25.996: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:47:25.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8584" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":151,"skipped":2401,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:47:26.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-4c466426-06e3-4523-8a7b-e7d0e5b74db7
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:47:26.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6005" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":152,"skipped":2404,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:47:26.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3823
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Apr 9 21:47:26.386: INFO: Found 0 stateful pods, waiting for 3
Apr 9 21:47:36.407: INFO: Found 2 stateful pods, waiting for 3
Apr 9 21:47:46.391: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 9 21:47:46.391: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 9 21:47:46.391: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 9 21:47:46.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec
--namespace=statefulset-3823 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:47:46.675: INFO: stderr: "I0409 21:47:46.534811 2629 log.go:172] (0xc000104b00) (0xc000b961e0) Create stream\nI0409 21:47:46.534872 2629 log.go:172] (0xc000104b00) (0xc000b961e0) Stream added, broadcasting: 1\nI0409 21:47:46.538062 2629 log.go:172] (0xc000104b00) Reply frame received for 1\nI0409 21:47:46.538091 2629 log.go:172] (0xc000104b00) (0xc0006e3b80) Create stream\nI0409 21:47:46.538099 2629 log.go:172] (0xc000104b00) (0xc0006e3b80) Stream added, broadcasting: 3\nI0409 21:47:46.538978 2629 log.go:172] (0xc000104b00) Reply frame received for 3\nI0409 21:47:46.539005 2629 log.go:172] (0xc000104b00) (0xc000b96280) Create stream\nI0409 21:47:46.539013 2629 log.go:172] (0xc000104b00) (0xc000b96280) Stream added, broadcasting: 5\nI0409 21:47:46.539850 2629 log.go:172] (0xc000104b00) Reply frame received for 5\nI0409 21:47:46.615619 2629 log.go:172] (0xc000104b00) Data frame received for 5\nI0409 21:47:46.615647 2629 log.go:172] (0xc000b96280) (5) Data frame handling\nI0409 21:47:46.615663 2629 log.go:172] (0xc000b96280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:47:46.666810 2629 log.go:172] (0xc000104b00) Data frame received for 3\nI0409 21:47:46.666843 2629 log.go:172] (0xc0006e3b80) (3) Data frame handling\nI0409 21:47:46.666874 2629 log.go:172] (0xc0006e3b80) (3) Data frame sent\nI0409 21:47:46.666899 2629 log.go:172] (0xc000104b00) Data frame received for 3\nI0409 21:47:46.666920 2629 log.go:172] (0xc0006e3b80) (3) Data frame handling\nI0409 21:47:46.667111 2629 log.go:172] (0xc000104b00) Data frame received for 5\nI0409 21:47:46.667142 2629 log.go:172] (0xc000b96280) (5) Data frame handling\nI0409 21:47:46.669382 2629 log.go:172] (0xc000104b00) Data frame received for 1\nI0409 21:47:46.669414 2629 log.go:172] (0xc000b961e0) (1) Data frame handling\nI0409 21:47:46.669436 2629 log.go:172] (0xc000b961e0) 
(1) Data frame sent\nI0409 21:47:46.669462 2629 log.go:172] (0xc000104b00) (0xc000b961e0) Stream removed, broadcasting: 1\nI0409 21:47:46.669900 2629 log.go:172] (0xc000104b00) (0xc000b961e0) Stream removed, broadcasting: 1\nI0409 21:47:46.669926 2629 log.go:172] (0xc000104b00) (0xc0006e3b80) Stream removed, broadcasting: 3\nI0409 21:47:46.670105 2629 log.go:172] (0xc000104b00) (0xc000b96280) Stream removed, broadcasting: 5\n" Apr 9 21:47:46.675: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:47:46.675: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 9 21:47:56.767: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 9 21:48:06.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3823 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:48:07.070: INFO: stderr: "I0409 21:48:06.983122 2649 log.go:172] (0xc0003bea50) (0xc000628140) Create stream\nI0409 21:48:06.983197 2649 log.go:172] (0xc0003bea50) (0xc000628140) Stream added, broadcasting: 1\nI0409 21:48:06.986497 2649 log.go:172] (0xc0003bea50) Reply frame received for 1\nI0409 21:48:06.986536 2649 log.go:172] (0xc0003bea50) (0xc00066f9a0) Create stream\nI0409 21:48:06.986549 2649 log.go:172] (0xc0003bea50) (0xc00066f9a0) Stream added, broadcasting: 3\nI0409 21:48:06.987442 2649 log.go:172] (0xc0003bea50) Reply frame received for 3\nI0409 21:48:06.987479 2649 log.go:172] (0xc0003bea50) (0xc000628280) Create stream\nI0409 21:48:06.987491 2649 log.go:172] (0xc0003bea50) (0xc000628280) Stream added, broadcasting: 5\nI0409 21:48:06.988281 2649 log.go:172] (0xc0003bea50) Reply frame received for 5\nI0409 
21:48:07.057330 2649 log.go:172] (0xc0003bea50) Data frame received for 5\nI0409 21:48:07.057368 2649 log.go:172] (0xc000628280) (5) Data frame handling\nI0409 21:48:07.057390 2649 log.go:172] (0xc000628280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 21:48:07.064429 2649 log.go:172] (0xc0003bea50) Data frame received for 3\nI0409 21:48:07.064467 2649 log.go:172] (0xc00066f9a0) (3) Data frame handling\nI0409 21:48:07.064495 2649 log.go:172] (0xc00066f9a0) (3) Data frame sent\nI0409 21:48:07.064563 2649 log.go:172] (0xc0003bea50) Data frame received for 3\nI0409 21:48:07.064591 2649 log.go:172] (0xc00066f9a0) (3) Data frame handling\nI0409 21:48:07.064737 2649 log.go:172] (0xc0003bea50) Data frame received for 5\nI0409 21:48:07.064761 2649 log.go:172] (0xc000628280) (5) Data frame handling\nI0409 21:48:07.066490 2649 log.go:172] (0xc0003bea50) Data frame received for 1\nI0409 21:48:07.066508 2649 log.go:172] (0xc000628140) (1) Data frame handling\nI0409 21:48:07.066522 2649 log.go:172] (0xc000628140) (1) Data frame sent\nI0409 21:48:07.066536 2649 log.go:172] (0xc0003bea50) (0xc000628140) Stream removed, broadcasting: 1\nI0409 21:48:07.066569 2649 log.go:172] (0xc0003bea50) Go away received\nI0409 21:48:07.066890 2649 log.go:172] (0xc0003bea50) (0xc000628140) Stream removed, broadcasting: 1\nI0409 21:48:07.066916 2649 log.go:172] (0xc0003bea50) (0xc00066f9a0) Stream removed, broadcasting: 3\nI0409 21:48:07.066932 2649 log.go:172] (0xc0003bea50) (0xc000628280) Stream removed, broadcasting: 5\n" Apr 9 21:48:07.070: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:48:07.071: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:48:27.119: INFO: Waiting for StatefulSet statefulset-3823/ss2 to complete update Apr 9 21:48:27.119: INFO: Waiting for Pod statefulset-3823/ss2-0 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 9 21:48:37.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3823 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 21:48:37.386: INFO: stderr: "I0409 21:48:37.259186 2673 log.go:172] (0xc000696b00) (0xc00061e000) Create stream\nI0409 21:48:37.259258 2673 log.go:172] (0xc000696b00) (0xc00061e000) Stream added, broadcasting: 1\nI0409 21:48:37.262007 2673 log.go:172] (0xc000696b00) Reply frame received for 1\nI0409 21:48:37.262048 2673 log.go:172] (0xc000696b00) (0xc000735900) Create stream\nI0409 21:48:37.262061 2673 log.go:172] (0xc000696b00) (0xc000735900) Stream added, broadcasting: 3\nI0409 21:48:37.262936 2673 log.go:172] (0xc000696b00) Reply frame received for 3\nI0409 21:48:37.262991 2673 log.go:172] (0xc000696b00) (0xc000418000) Create stream\nI0409 21:48:37.263012 2673 log.go:172] (0xc000696b00) (0xc000418000) Stream added, broadcasting: 5\nI0409 21:48:37.263763 2673 log.go:172] (0xc000696b00) Reply frame received for 5\nI0409 21:48:37.351923 2673 log.go:172] (0xc000696b00) Data frame received for 5\nI0409 21:48:37.351960 2673 log.go:172] (0xc000418000) (5) Data frame handling\nI0409 21:48:37.351987 2673 log.go:172] (0xc000418000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 21:48:37.380469 2673 log.go:172] (0xc000696b00) Data frame received for 5\nI0409 21:48:37.380509 2673 log.go:172] (0xc000418000) (5) Data frame handling\nI0409 21:48:37.380529 2673 log.go:172] (0xc000696b00) Data frame received for 3\nI0409 21:48:37.380536 2673 log.go:172] (0xc000735900) (3) Data frame handling\nI0409 21:48:37.380545 2673 log.go:172] (0xc000735900) (3) Data frame sent\nI0409 21:48:37.380820 2673 log.go:172] (0xc000696b00) Data frame received for 3\nI0409 21:48:37.380832 2673 log.go:172] (0xc000735900) (3) Data frame handling\nI0409 21:48:37.382781 2673 
log.go:172] (0xc000696b00) Data frame received for 1\nI0409 21:48:37.382794 2673 log.go:172] (0xc00061e000) (1) Data frame handling\nI0409 21:48:37.382805 2673 log.go:172] (0xc00061e000) (1) Data frame sent\nI0409 21:48:37.382815 2673 log.go:172] (0xc000696b00) (0xc00061e000) Stream removed, broadcasting: 1\nI0409 21:48:37.382981 2673 log.go:172] (0xc000696b00) Go away received\nI0409 21:48:37.383125 2673 log.go:172] (0xc000696b00) (0xc00061e000) Stream removed, broadcasting: 1\nI0409 21:48:37.383141 2673 log.go:172] (0xc000696b00) (0xc000735900) Stream removed, broadcasting: 3\nI0409 21:48:37.383147 2673 log.go:172] (0xc000696b00) (0xc000418000) Stream removed, broadcasting: 5\n" Apr 9 21:48:37.387: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 21:48:37.387: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 21:48:47.415: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 9 21:48:57.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3823 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 21:48:57.637: INFO: stderr: "I0409 21:48:57.574814 2696 log.go:172] (0xc0005a8fd0) (0xc00086e000) Create stream\nI0409 21:48:57.574866 2696 log.go:172] (0xc0005a8fd0) (0xc00086e000) Stream added, broadcasting: 1\nI0409 21:48:57.577293 2696 log.go:172] (0xc0005a8fd0) Reply frame received for 1\nI0409 21:48:57.577347 2696 log.go:172] (0xc0005a8fd0) (0xc0005fda40) Create stream\nI0409 21:48:57.577362 2696 log.go:172] (0xc0005a8fd0) (0xc0005fda40) Stream added, broadcasting: 3\nI0409 21:48:57.578158 2696 log.go:172] (0xc0005a8fd0) Reply frame received for 3\nI0409 21:48:57.578183 2696 log.go:172] (0xc0005a8fd0) (0xc00086e0a0) Create stream\nI0409 21:48:57.578191 2696 log.go:172] (0xc0005a8fd0) (0xc00086e0a0) Stream added, 
broadcasting: 5\nI0409 21:48:57.578941 2696 log.go:172] (0xc0005a8fd0) Reply frame received for 5\nI0409 21:48:57.630546 2696 log.go:172] (0xc0005a8fd0) Data frame received for 5\nI0409 21:48:57.630579 2696 log.go:172] (0xc00086e0a0) (5) Data frame handling\nI0409 21:48:57.630591 2696 log.go:172] (0xc00086e0a0) (5) Data frame sent\nI0409 21:48:57.630601 2696 log.go:172] (0xc0005a8fd0) Data frame received for 5\nI0409 21:48:57.630614 2696 log.go:172] (0xc00086e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 21:48:57.630677 2696 log.go:172] (0xc0005a8fd0) Data frame received for 3\nI0409 21:48:57.630717 2696 log.go:172] (0xc0005fda40) (3) Data frame handling\nI0409 21:48:57.630751 2696 log.go:172] (0xc0005fda40) (3) Data frame sent\nI0409 21:48:57.630771 2696 log.go:172] (0xc0005a8fd0) Data frame received for 3\nI0409 21:48:57.630784 2696 log.go:172] (0xc0005fda40) (3) Data frame handling\nI0409 21:48:57.632451 2696 log.go:172] (0xc0005a8fd0) Data frame received for 1\nI0409 21:48:57.632483 2696 log.go:172] (0xc00086e000) (1) Data frame handling\nI0409 21:48:57.632497 2696 log.go:172] (0xc00086e000) (1) Data frame sent\nI0409 21:48:57.632512 2696 log.go:172] (0xc0005a8fd0) (0xc00086e000) Stream removed, broadcasting: 1\nI0409 21:48:57.632533 2696 log.go:172] (0xc0005a8fd0) Go away received\nI0409 21:48:57.632971 2696 log.go:172] (0xc0005a8fd0) (0xc00086e000) Stream removed, broadcasting: 1\nI0409 21:48:57.632992 2696 log.go:172] (0xc0005a8fd0) (0xc0005fda40) Stream removed, broadcasting: 3\nI0409 21:48:57.633005 2696 log.go:172] (0xc0005a8fd0) (0xc00086e0a0) Stream removed, broadcasting: 5\n" Apr 9 21:48:57.637: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 21:48:57.637: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 21:49:07.656: INFO: Waiting for StatefulSet statefulset-3823/ss2 to 
complete update
Apr 9 21:49:07.656: INFO: Waiting for Pod statefulset-3823/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 9 21:49:07.656: INFO: Waiting for Pod statefulset-3823/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 9 21:49:07.656: INFO: Waiting for Pod statefulset-3823/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 9 21:49:17.663: INFO: Waiting for StatefulSet statefulset-3823/ss2 to complete update
Apr 9 21:49:17.663: INFO: Waiting for Pod statefulset-3823/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 9 21:49:17.663: INFO: Waiting for Pod statefulset-3823/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 9 21:49:27.674: INFO: Waiting for StatefulSet statefulset-3823/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 9 21:49:37.664: INFO: Deleting all statefulset in ns statefulset-3823
Apr 9 21:49:37.667: INFO: Scaling statefulset ss2 to 0
Apr 9 21:50:07.699: INFO: Waiting for statefulset status.replicas updated to 0
Apr 9 21:50:07.702: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:50:07.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3823" for this suite.
• [SLOW TEST:161.444 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":153,"skipped":2429,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:50:07.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 21:50:08.209: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 21:50:10.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0,
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065808, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065808, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065808, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065808, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 21:50:13.261: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:50:25.424: INFO: Waiting
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1061" for this suite. STEP: Destroying namespace "webhook-1061-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.798 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":154,"skipped":2440,"failed":0} [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:50:25.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Apr 9 21:50:25.565: INFO: Waiting up to 5m0s for pod "var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72" in namespace "var-expansion-6194" to be "success or failure" Apr 9 21:50:25.588: INFO: Pod "var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.348316ms Apr 9 21:50:27.618: INFO: Pod "var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052648645s Apr 9 21:50:29.622: INFO: Pod "var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056945669s STEP: Saw pod success Apr 9 21:50:29.622: INFO: Pod "var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72" satisfied condition "success or failure" Apr 9 21:50:29.625: INFO: Trying to get logs from node jerma-worker pod var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72 container dapi-container: STEP: delete the pod Apr 9 21:50:29.702: INFO: Waiting for pod var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72 to disappear Apr 9 21:50:29.708: INFO: Pod var-expansion-b6c085bc-32c2-44f5-aabb-323623f84b72 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:50:29.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6194" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2440,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:50:29.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:50:29.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a" in namespace "projected-255" to be "success or failure" Apr 9 21:50:29.762: INFO: Pod "downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.644391ms Apr 9 21:50:31.766: INFO: Pod "downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007351452s Apr 9 21:50:33.770: INFO: Pod "downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011893019s STEP: Saw pod success Apr 9 21:50:33.770: INFO: Pod "downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a" satisfied condition "success or failure" Apr 9 21:50:33.773: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a container client-container: STEP: delete the pod Apr 9 21:50:33.794: INFO: Waiting for pod downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a to disappear Apr 9 21:50:33.798: INFO: Pod downwardapi-volume-4bbe6b50-eea7-4b47-bee0-f3a56d275f5a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:50:33.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-255" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:50:33.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-gzmz STEP: Creating a pod to test atomic-volume-subpath Apr 9 21:50:33.920: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gzmz" in namespace "subpath-2259" to be "success or failure" Apr 9 21:50:33.924: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Pending", Reason="", readiness=false. Elapsed: 3.986718ms Apr 9 21:50:35.928: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007336747s Apr 9 21:50:37.932: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 4.011779398s Apr 9 21:50:39.936: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 6.015925619s Apr 9 21:50:41.941: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 8.020670686s Apr 9 21:50:43.945: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 10.024963856s Apr 9 21:50:45.950: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 12.029296908s Apr 9 21:50:47.953: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 14.033185328s Apr 9 21:50:49.958: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 16.037711895s Apr 9 21:50:51.962: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 18.041930108s Apr 9 21:50:53.967: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. Elapsed: 20.0464139s Apr 9 21:50:55.971: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.050573924s Apr 9 21:50:57.975: INFO: Pod "pod-subpath-test-downwardapi-gzmz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054935196s STEP: Saw pod success Apr 9 21:50:57.975: INFO: Pod "pod-subpath-test-downwardapi-gzmz" satisfied condition "success or failure" Apr 9 21:50:57.978: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-gzmz container test-container-subpath-downwardapi-gzmz: STEP: delete the pod Apr 9 21:50:58.022: INFO: Waiting for pod pod-subpath-test-downwardapi-gzmz to disappear Apr 9 21:50:58.040: INFO: Pod pod-subpath-test-downwardapi-gzmz no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-gzmz Apr 9 21:50:58.040: INFO: Deleting pod "pod-subpath-test-downwardapi-gzmz" in namespace "subpath-2259" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:50:58.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2259" for this suite. 
• [SLOW TEST:24.246 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":157,"skipped":2535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:50:58.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:50:58.155: INFO: Create a RollingUpdate DaemonSet Apr 9 21:50:58.158: INFO: Check that daemon pods launch on every node of the cluster Apr 9 21:50:58.164: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:50:58.234: INFO: Number of nodes with available pods: 0 Apr 9 21:50:58.234: INFO: Node 
jerma-worker is running more than one daemon pod Apr 9 21:50:59.239: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:50:59.242: INFO: Number of nodes with available pods: 0 Apr 9 21:50:59.242: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:51:00.239: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:51:00.242: INFO: Number of nodes with available pods: 0 Apr 9 21:51:00.242: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:51:01.239: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:51:01.243: INFO: Number of nodes with available pods: 0 Apr 9 21:51:01.243: INFO: Node jerma-worker is running more than one daemon pod Apr 9 21:51:02.240: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:51:02.244: INFO: Number of nodes with available pods: 2 Apr 9 21:51:02.244: INFO: Number of running nodes: 2, number of available pods: 2 Apr 9 21:51:02.244: INFO: Update the DaemonSet to trigger a rollout Apr 9 21:51:02.251: INFO: Updating DaemonSet daemon-set Apr 9 21:51:06.306: INFO: Roll back the DaemonSet before rollout is complete Apr 9 21:51:06.317: INFO: Updating DaemonSet daemon-set Apr 9 21:51:06.317: INFO: Make sure DaemonSet rollback is complete Apr 9 21:51:06.320: INFO: Wrong image for pod: daemon-set-kdgjr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 9 21:51:06.320: INFO: Pod daemon-set-kdgjr is not available Apr 9 21:51:06.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:51:07.330: INFO: Wrong image for pod: daemon-set-kdgjr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 9 21:51:07.330: INFO: Pod daemon-set-kdgjr is not available Apr 9 21:51:07.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 21:51:08.331: INFO: Pod daemon-set-9z6cx is not available Apr 9 21:51:08.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-716, will wait for the garbage collector to delete the pods Apr 9 21:51:08.402: INFO: Deleting DaemonSet.extensions daemon-set took: 7.326175ms Apr 9 21:51:08.802: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.317659ms Apr 9 21:51:19.505: INFO: Number of nodes with available pods: 0 Apr 9 21:51:19.505: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 21:51:19.508: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-716/daemonsets","resourceVersion":"6780292"},"items":null} Apr 9 21:51:19.511: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-716/pods","resourceVersion":"6780292"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:51:19.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-716" for this suite. • [SLOW TEST:21.502 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":158,"skipped":2559,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:51:19.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 9 21:51:19.612: INFO: Waiting up to 5m0s for pod "var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9" in namespace "var-expansion-5208" to be "success or failure" Apr 9 21:51:19.615: INFO: Pod "var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.989729ms Apr 9 21:51:21.620: INFO: Pod "var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007894571s Apr 9 21:51:23.631: INFO: Pod "var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018841619s STEP: Saw pod success Apr 9 21:51:23.631: INFO: Pod "var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9" satisfied condition "success or failure" Apr 9 21:51:23.637: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9 container dapi-container: STEP: delete the pod Apr 9 21:51:23.667: INFO: Waiting for pod var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9 to disappear Apr 9 21:51:23.679: INFO: Pod var-expansion-527b49f6-25d4-49f6-8bbe-a9edb69ffad9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:51:23.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5208" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2559,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:51:23.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2769 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-2769 Apr 9 21:51:23.763: INFO: Found 0 stateful pods, waiting for 1 Apr 9 21:51:33.768: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 9 21:51:33.788: INFO: Deleting all statefulset in ns statefulset-2769 Apr 9 21:51:33.805: INFO: Scaling statefulset ss to 
0 Apr 9 21:51:43.888: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 21:51:43.891: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:51:43.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2769" for this suite. • [SLOW TEST:20.230 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":160,"skipped":2578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:51:43.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:51:55.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8408" for this suite. • [SLOW TEST:11.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":161,"skipped":2610,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:51:55.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 9 21:51:55.118: INFO: >>> kubeConfig: /root/.kube/config Apr 9 21:51:57.056: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:52:07.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7178" for this suite. 
• [SLOW TEST:12.509 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":162,"skipped":2610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:52:07.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 9 21:52:07.636: INFO: PodSpec: initContainers in spec.initContainers Apr 9 21:52:53.906: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-27766371-2be8-4f98-9184-d4b5b87fa3d9", GenerateName:"", 
Namespace:"init-container-7172", SelfLink:"/api/v1/namespaces/init-container-7172/pods/pod-init-27766371-2be8-4f98-9184-d4b5b87fa3d9", UID:"3ea3c823-47f9-42de-97ef-f07956f2022d", ResourceVersion:"6780772", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722065927, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"636359494"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-d8c4x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004e30900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d8c4x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d8c4x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d8c4x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00495a6e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002528540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00495a780)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc00495a7b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00495a7b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00495a7bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065927, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065927, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065927, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722065927, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.244", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.244"}}, StartTime:(*v1.Time)(0xc0034b6840), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001dca150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001dca1c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b389488124bca2fbcf199240c5d912990d6570e723734adf5564df7be7f16203", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034b6880), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034b6860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00495a84f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:52:53.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"init-container-7172" for this suite.
• [SLOW TEST:46.408 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":163,"skipped":2657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:52:53.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0409 21:53:24.606832 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 9 21:53:24.606: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:53:24.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8598" for this suite.
• [SLOW TEST:30.638 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":164,"skipped":2687,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:53:24.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 9 21:53:24.671: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 12.838524ms)
Apr 9 21:53:24.686: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 15.729463ms)
Apr 9 21:53:24.689: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.948123ms)
Apr 9 21:53:24.692: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.853755ms)
Apr 9 21:53:24.696: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.401121ms)
Apr 9 21:53:24.699: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.367664ms)
Apr 9 21:53:24.703: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.424259ms)
Apr 9 21:53:24.706: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.058222ms)
Apr 9 21:53:24.709: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.063525ms)
Apr 9 21:53:24.712: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.395519ms)
Apr 9 21:53:24.716: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.625173ms)
Apr 9 21:53:24.719: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.4592ms)
Apr 9 21:53:24.723: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.665023ms)
Apr 9 21:53:24.727: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.395007ms)
Apr 9 21:53:24.730: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.71127ms)
Apr 9 21:53:24.734: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.629768ms)
Apr 9 21:53:24.737: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.498591ms)
Apr 9 21:53:24.741: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.532595ms)
Apr 9 21:53:24.744: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.977554ms)
Apr 9 21:53:24.748: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.494666ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:53:24.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8767" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":165,"skipped":2696,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:53:24.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:53:31.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9346" for this suite.
• [SLOW TEST:7.098 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":166,"skipped":2697,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:53:31.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Apr 9 21:53:31.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 9 21:53:34.407: INFO: stderr: ""
Apr 9 21:53:34.407: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 21:53:34.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5396" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":167,"skipped":2707,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 21:53:34.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 9 21:53:34.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5237
I0409 21:53:34.508785 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5237, replica count: 1
I0409 21:53:35.559223 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0409 21:53:36.559461 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0409 21:53:37.559711 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 9 21:53:37.689: INFO: Created: latency-svc-qml5k
Apr 9 21:53:37.717: INFO: Got endpoints: latency-svc-qml5k [57.341866ms]
Apr 9 21:53:37.743:
INFO: Created: latency-svc-kvbk4 Apr 9 21:53:37.758: INFO: Got endpoints: latency-svc-kvbk4 [40.878983ms] Apr 9 21:53:37.780: INFO: Created: latency-svc-f89qj Apr 9 21:53:37.797: INFO: Got endpoints: latency-svc-f89qj [80.263773ms] Apr 9 21:53:37.816: INFO: Created: latency-svc-96sb7 Apr 9 21:53:37.878: INFO: Got endpoints: latency-svc-96sb7 [161.249475ms] Apr 9 21:53:37.880: INFO: Created: latency-svc-qxtv8 Apr 9 21:53:37.896: INFO: Got endpoints: latency-svc-qxtv8 [178.460994ms] Apr 9 21:53:37.916: INFO: Created: latency-svc-4nfkd Apr 9 21:53:37.959: INFO: Got endpoints: latency-svc-4nfkd [241.650363ms] Apr 9 21:53:38.022: INFO: Created: latency-svc-xvv6n Apr 9 21:53:38.028: INFO: Got endpoints: latency-svc-xvv6n [310.644093ms] Apr 9 21:53:38.049: INFO: Created: latency-svc-hjw9b Apr 9 21:53:38.070: INFO: Got endpoints: latency-svc-hjw9b [353.27074ms] Apr 9 21:53:38.098: INFO: Created: latency-svc-mjtnq Apr 9 21:53:38.147: INFO: Got endpoints: latency-svc-mjtnq [429.914094ms] Apr 9 21:53:38.156: INFO: Created: latency-svc-29lvn Apr 9 21:53:38.172: INFO: Got endpoints: latency-svc-29lvn [455.360753ms] Apr 9 21:53:38.192: INFO: Created: latency-svc-kqtnq Apr 9 21:53:38.221: INFO: Got endpoints: latency-svc-kqtnq [503.946607ms] Apr 9 21:53:38.241: INFO: Created: latency-svc-mtvr2 Apr 9 21:53:38.280: INFO: Got endpoints: latency-svc-mtvr2 [560.737447ms] Apr 9 21:53:38.295: INFO: Created: latency-svc-rmm6g Apr 9 21:53:38.324: INFO: Got endpoints: latency-svc-rmm6g [604.492552ms] Apr 9 21:53:38.343: INFO: Created: latency-svc-ptn2l Apr 9 21:53:38.357: INFO: Got endpoints: latency-svc-ptn2l [640.447902ms] Apr 9 21:53:38.429: INFO: Created: latency-svc-pj4xz Apr 9 21:53:38.457: INFO: Got endpoints: latency-svc-pj4xz [739.356951ms] Apr 9 21:53:38.457: INFO: Created: latency-svc-bkxsg Apr 9 21:53:38.477: INFO: Got endpoints: latency-svc-bkxsg [759.650984ms] Apr 9 21:53:38.500: INFO: Created: latency-svc-f4tc6 Apr 9 21:53:38.514: INFO: Got endpoints: latency-svc-f4tc6 
[756.418664ms] Apr 9 21:53:38.568: INFO: Created: latency-svc-hn9vm Apr 9 21:53:38.571: INFO: Got endpoints: latency-svc-hn9vm [774.049075ms] Apr 9 21:53:38.594: INFO: Created: latency-svc-8bx9s Apr 9 21:53:38.623: INFO: Got endpoints: latency-svc-8bx9s [744.71202ms] Apr 9 21:53:38.655: INFO: Created: latency-svc-l7lgx Apr 9 21:53:38.717: INFO: Got endpoints: latency-svc-l7lgx [821.30165ms] Apr 9 21:53:38.719: INFO: Created: latency-svc-tjvb8 Apr 9 21:53:38.737: INFO: Got endpoints: latency-svc-tjvb8 [778.284957ms] Apr 9 21:53:38.763: INFO: Created: latency-svc-2zghr Apr 9 21:53:38.773: INFO: Got endpoints: latency-svc-2zghr [745.638474ms] Apr 9 21:53:38.806: INFO: Created: latency-svc-g2sd5 Apr 9 21:53:38.848: INFO: Got endpoints: latency-svc-g2sd5 [777.314796ms] Apr 9 21:53:38.848: INFO: Created: latency-svc-v5tsm Apr 9 21:53:38.858: INFO: Got endpoints: latency-svc-v5tsm [710.633692ms] Apr 9 21:53:38.878: INFO: Created: latency-svc-b62sb Apr 9 21:53:38.902: INFO: Got endpoints: latency-svc-b62sb [729.107717ms] Apr 9 21:53:38.926: INFO: Created: latency-svc-2zwgq Apr 9 21:53:38.943: INFO: Got endpoints: latency-svc-2zwgq [721.507168ms] Apr 9 21:53:39.000: INFO: Created: latency-svc-585ns Apr 9 21:53:39.001: INFO: Got endpoints: latency-svc-585ns [721.176101ms] Apr 9 21:53:39.021: INFO: Created: latency-svc-pjpgx Apr 9 21:53:39.033: INFO: Got endpoints: latency-svc-pjpgx [709.862558ms] Apr 9 21:53:39.052: INFO: Created: latency-svc-hmwh8 Apr 9 21:53:39.063: INFO: Got endpoints: latency-svc-hmwh8 [705.943829ms] Apr 9 21:53:39.088: INFO: Created: latency-svc-9gt49 Apr 9 21:53:39.153: INFO: Got endpoints: latency-svc-9gt49 [696.451907ms] Apr 9 21:53:39.156: INFO: Created: latency-svc-s5rs7 Apr 9 21:53:39.160: INFO: Got endpoints: latency-svc-s5rs7 [682.987079ms] Apr 9 21:53:39.201: INFO: Created: latency-svc-tz4pv Apr 9 21:53:39.214: INFO: Got endpoints: latency-svc-tz4pv [700.036754ms] Apr 9 21:53:39.244: INFO: Created: latency-svc-jl77z Apr 9 21:53:39.285: INFO: 
Got endpoints: latency-svc-jl77z [713.682445ms] Apr 9 21:53:39.297: INFO: Created: latency-svc-nvvnt Apr 9 21:53:39.311: INFO: Got endpoints: latency-svc-nvvnt [687.892769ms] Apr 9 21:53:39.339: INFO: Created: latency-svc-tlf46 Apr 9 21:53:39.368: INFO: Got endpoints: latency-svc-tlf46 [651.412886ms] Apr 9 21:53:39.435: INFO: Created: latency-svc-sbnm5 Apr 9 21:53:39.444: INFO: Got endpoints: latency-svc-sbnm5 [706.676795ms] Apr 9 21:53:39.477: INFO: Created: latency-svc-5ng5h Apr 9 21:53:39.513: INFO: Got endpoints: latency-svc-5ng5h [739.760772ms] Apr 9 21:53:39.594: INFO: Created: latency-svc-p4wrj Apr 9 21:53:39.598: INFO: Got endpoints: latency-svc-p4wrj [750.328437ms] Apr 9 21:53:39.620: INFO: Created: latency-svc-l62nc Apr 9 21:53:39.636: INFO: Got endpoints: latency-svc-l62nc [778.661955ms] Apr 9 21:53:39.657: INFO: Created: latency-svc-ljdrv Apr 9 21:53:39.673: INFO: Got endpoints: latency-svc-ljdrv [771.456052ms] Apr 9 21:53:39.747: INFO: Created: latency-svc-6hmnh Apr 9 21:53:39.783: INFO: Got endpoints: latency-svc-6hmnh [840.732161ms] Apr 9 21:53:39.784: INFO: Created: latency-svc-5hfc9 Apr 9 21:53:39.793: INFO: Got endpoints: latency-svc-5hfc9 [792.088945ms] Apr 9 21:53:39.836: INFO: Created: latency-svc-mqmlj Apr 9 21:53:39.878: INFO: Got endpoints: latency-svc-mqmlj [844.708649ms] Apr 9 21:53:39.885: INFO: Created: latency-svc-swz6z Apr 9 21:53:39.910: INFO: Got endpoints: latency-svc-swz6z [846.096372ms] Apr 9 21:53:39.945: INFO: Created: latency-svc-mn8kl Apr 9 21:53:39.956: INFO: Got endpoints: latency-svc-mn8kl [802.55228ms] Apr 9 21:53:40.022: INFO: Created: latency-svc-kw8ln Apr 9 21:53:40.025: INFO: Got endpoints: latency-svc-kw8ln [865.09349ms] Apr 9 21:53:40.047: INFO: Created: latency-svc-gjvv9 Apr 9 21:53:40.070: INFO: Got endpoints: latency-svc-gjvv9 [855.869749ms] Apr 9 21:53:40.095: INFO: Created: latency-svc-bqllf Apr 9 21:53:40.107: INFO: Got endpoints: latency-svc-bqllf [822.158857ms] Apr 9 21:53:40.166: INFO: Created: 
latency-svc-wg696 Apr 9 21:53:40.168: INFO: Got endpoints: latency-svc-wg696 [857.066812ms] Apr 9 21:53:40.191: INFO: Created: latency-svc-4j2rn Apr 9 21:53:40.204: INFO: Got endpoints: latency-svc-4j2rn [835.279023ms] Apr 9 21:53:40.322: INFO: Created: latency-svc-twmm8 Apr 9 21:53:40.326: INFO: Got endpoints: latency-svc-twmm8 [881.985062ms] Apr 9 21:53:40.353: INFO: Created: latency-svc-tnb6z Apr 9 21:53:40.367: INFO: Got endpoints: latency-svc-tnb6z [853.277552ms] Apr 9 21:53:40.401: INFO: Created: latency-svc-tw9q4 Apr 9 21:53:40.465: INFO: Got endpoints: latency-svc-tw9q4 [866.62482ms] Apr 9 21:53:40.478: INFO: Created: latency-svc-hbc77 Apr 9 21:53:40.492: INFO: Got endpoints: latency-svc-hbc77 [855.935234ms] Apr 9 21:53:40.514: INFO: Created: latency-svc-xvlfm Apr 9 21:53:40.529: INFO: Got endpoints: latency-svc-xvlfm [855.536153ms] Apr 9 21:53:40.550: INFO: Created: latency-svc-v9qrh Apr 9 21:53:40.621: INFO: Got endpoints: latency-svc-v9qrh [837.0488ms] Apr 9 21:53:40.623: INFO: Created: latency-svc-78zpj Apr 9 21:53:40.625: INFO: Got endpoints: latency-svc-78zpj [832.171099ms] Apr 9 21:53:40.666: INFO: Created: latency-svc-d8czs Apr 9 21:53:40.679: INFO: Got endpoints: latency-svc-d8czs [801.172584ms] Apr 9 21:53:40.700: INFO: Created: latency-svc-cpb5g Apr 9 21:53:40.716: INFO: Got endpoints: latency-svc-cpb5g [806.820245ms] Apr 9 21:53:40.766: INFO: Created: latency-svc-696lc Apr 9 21:53:40.791: INFO: Got endpoints: latency-svc-696lc [835.218958ms] Apr 9 21:53:40.828: INFO: Created: latency-svc-xc67t Apr 9 21:53:40.849: INFO: Got endpoints: latency-svc-xc67t [823.899854ms] Apr 9 21:53:40.896: INFO: Created: latency-svc-f5fcp Apr 9 21:53:40.903: INFO: Got endpoints: latency-svc-f5fcp [832.845044ms] Apr 9 21:53:40.922: INFO: Created: latency-svc-hqqrm Apr 9 21:53:40.934: INFO: Got endpoints: latency-svc-hqqrm [826.374885ms] Apr 9 21:53:40.953: INFO: Created: latency-svc-4bqkg Apr 9 21:53:40.964: INFO: Got endpoints: latency-svc-4bqkg [795.71007ms] Apr 9 
21:53:40.983: INFO: Created: latency-svc-mpksv Apr 9 21:53:41.022: INFO: Got endpoints: latency-svc-mpksv [817.767343ms] Apr 9 21:53:41.043: INFO: Created: latency-svc-bpz5g Apr 9 21:53:41.054: INFO: Got endpoints: latency-svc-bpz5g [728.218226ms] Apr 9 21:53:41.079: INFO: Created: latency-svc-2hmrq Apr 9 21:53:41.102: INFO: Got endpoints: latency-svc-2hmrq [735.619504ms] Apr 9 21:53:41.178: INFO: Created: latency-svc-mqfvl Apr 9 21:53:41.181: INFO: Got endpoints: latency-svc-mqfvl [716.470565ms] Apr 9 21:53:41.223: INFO: Created: latency-svc-g7rwl Apr 9 21:53:41.236: INFO: Got endpoints: latency-svc-g7rwl [743.378654ms] Apr 9 21:53:41.255: INFO: Created: latency-svc-8t99q Apr 9 21:53:41.266: INFO: Got endpoints: latency-svc-8t99q [737.625936ms] Apr 9 21:53:41.361: INFO: Created: latency-svc-b5n2v Apr 9 21:53:41.375: INFO: Got endpoints: latency-svc-b5n2v [754.060675ms] Apr 9 21:53:41.402: INFO: Created: latency-svc-8fgms Apr 9 21:53:41.417: INFO: Got endpoints: latency-svc-8fgms [791.609847ms] Apr 9 21:53:41.470: INFO: Created: latency-svc-5vs95 Apr 9 21:53:41.511: INFO: Got endpoints: latency-svc-5vs95 [831.883978ms] Apr 9 21:53:41.603: INFO: Created: latency-svc-hsjwd Apr 9 21:53:41.609: INFO: Got endpoints: latency-svc-hsjwd [892.95524ms] Apr 9 21:53:41.636: INFO: Created: latency-svc-cvg7k Apr 9 21:53:41.655: INFO: Got endpoints: latency-svc-cvg7k [863.844887ms] Apr 9 21:53:41.685: INFO: Created: latency-svc-6hbnh Apr 9 21:53:41.700: INFO: Got endpoints: latency-svc-6hbnh [850.513279ms] Apr 9 21:53:41.740: INFO: Created: latency-svc-tkrmd Apr 9 21:53:41.748: INFO: Got endpoints: latency-svc-tkrmd [845.029953ms] Apr 9 21:53:41.768: INFO: Created: latency-svc-lkfsd Apr 9 21:53:41.785: INFO: Got endpoints: latency-svc-lkfsd [851.643803ms] Apr 9 21:53:41.829: INFO: Created: latency-svc-pkx5z Apr 9 21:53:41.839: INFO: Got endpoints: latency-svc-pkx5z [875.316665ms] Apr 9 21:53:41.910: INFO: Created: latency-svc-8l2gl Apr 9 21:53:41.917: INFO: Got endpoints: 
latency-svc-8l2gl [895.930795ms] Apr 9 21:53:41.938: INFO: Created: latency-svc-srsb4 Apr 9 21:53:41.953: INFO: Got endpoints: latency-svc-srsb4 [899.151455ms] Apr 9 21:53:41.972: INFO: Created: latency-svc-dg4nf Apr 9 21:53:41.984: INFO: Got endpoints: latency-svc-dg4nf [881.506982ms] Apr 9 21:53:42.002: INFO: Created: latency-svc-94t6l Apr 9 21:53:42.039: INFO: Got endpoints: latency-svc-94t6l [857.9681ms] Apr 9 21:53:42.050: INFO: Created: latency-svc-ddgxl Apr 9 21:53:42.078: INFO: Got endpoints: latency-svc-ddgxl [842.449783ms] Apr 9 21:53:42.111: INFO: Created: latency-svc-lmskt Apr 9 21:53:42.122: INFO: Got endpoints: latency-svc-lmskt [855.926319ms] Apr 9 21:53:42.178: INFO: Created: latency-svc-vptrn Apr 9 21:53:42.181: INFO: Got endpoints: latency-svc-vptrn [806.289676ms] Apr 9 21:53:42.224: INFO: Created: latency-svc-bjs7t Apr 9 21:53:42.237: INFO: Got endpoints: latency-svc-bjs7t [819.935137ms] Apr 9 21:53:42.254: INFO: Created: latency-svc-n8lpr Apr 9 21:53:42.269: INFO: Got endpoints: latency-svc-n8lpr [757.358673ms] Apr 9 21:53:42.315: INFO: Created: latency-svc-8bhbc Apr 9 21:53:42.318: INFO: Got endpoints: latency-svc-8bhbc [708.523554ms] Apr 9 21:53:42.357: INFO: Created: latency-svc-q8nh8 Apr 9 21:53:42.370: INFO: Got endpoints: latency-svc-q8nh8 [714.70429ms] Apr 9 21:53:42.387: INFO: Created: latency-svc-2hpn9 Apr 9 21:53:42.411: INFO: Got endpoints: latency-svc-2hpn9 [710.670412ms] Apr 9 21:53:42.471: INFO: Created: latency-svc-fg2z8 Apr 9 21:53:42.479: INFO: Got endpoints: latency-svc-fg2z8 [730.777844ms] Apr 9 21:53:42.507: INFO: Created: latency-svc-dkf8s Apr 9 21:53:42.527: INFO: Got endpoints: latency-svc-dkf8s [741.669199ms] Apr 9 21:53:42.567: INFO: Created: latency-svc-5zbqk Apr 9 21:53:42.609: INFO: Got endpoints: latency-svc-5zbqk [769.561605ms] Apr 9 21:53:42.620: INFO: Created: latency-svc-zzrn5 Apr 9 21:53:42.635: INFO: Got endpoints: latency-svc-zzrn5 [717.592064ms] Apr 9 21:53:42.663: INFO: Created: latency-svc-6rdgd Apr 9 
21:53:42.684: INFO: Got endpoints: latency-svc-6rdgd [730.307378ms] Apr 9 21:53:42.704: INFO: Created: latency-svc-8kkjl Apr 9 21:53:42.746: INFO: Got endpoints: latency-svc-8kkjl [761.955183ms] Apr 9 21:53:42.761: INFO: Created: latency-svc-zjmkz Apr 9 21:53:42.780: INFO: Got endpoints: latency-svc-zjmkz [740.607174ms] Apr 9 21:53:42.807: INFO: Created: latency-svc-btjbt Apr 9 21:53:42.822: INFO: Got endpoints: latency-svc-btjbt [743.765451ms] Apr 9 21:53:42.890: INFO: Created: latency-svc-gx7tw Apr 9 21:53:42.894: INFO: Got endpoints: latency-svc-gx7tw [771.826569ms] Apr 9 21:53:42.920: INFO: Created: latency-svc-8qvjt Apr 9 21:53:42.931: INFO: Got endpoints: latency-svc-8qvjt [749.818708ms] Apr 9 21:53:42.950: INFO: Created: latency-svc-f2kdd Apr 9 21:53:42.961: INFO: Got endpoints: latency-svc-f2kdd [724.416055ms] Apr 9 21:53:42.982: INFO: Created: latency-svc-fbmkd Apr 9 21:53:43.058: INFO: Got endpoints: latency-svc-fbmkd [788.786754ms] Apr 9 21:53:43.060: INFO: Created: latency-svc-bskmf Apr 9 21:53:43.064: INFO: Got endpoints: latency-svc-bskmf [745.508982ms] Apr 9 21:53:43.082: INFO: Created: latency-svc-z257l Apr 9 21:53:43.094: INFO: Got endpoints: latency-svc-z257l [724.458863ms] Apr 9 21:53:43.112: INFO: Created: latency-svc-kcf2t Apr 9 21:53:43.124: INFO: Got endpoints: latency-svc-kcf2t [713.493369ms] Apr 9 21:53:43.142: INFO: Created: latency-svc-s68kj Apr 9 21:53:43.155: INFO: Got endpoints: latency-svc-s68kj [675.555026ms] Apr 9 21:53:43.219: INFO: Created: latency-svc-zczv2 Apr 9 21:53:43.227: INFO: Got endpoints: latency-svc-zczv2 [700.116167ms] Apr 9 21:53:43.262: INFO: Created: latency-svc-xklk2 Apr 9 21:53:43.292: INFO: Got endpoints: latency-svc-xklk2 [683.517438ms] Apr 9 21:53:43.316: INFO: Created: latency-svc-rcsh6 Apr 9 21:53:43.357: INFO: Got endpoints: latency-svc-rcsh6 [721.84563ms] Apr 9 21:53:43.371: INFO: Created: latency-svc-cdp9q Apr 9 21:53:43.384: INFO: Got endpoints: latency-svc-cdp9q [700.041504ms] Apr 9 21:53:43.401: INFO: 
Created: latency-svc-hqccd Apr 9 21:53:43.421: INFO: Got endpoints: latency-svc-hqccd [675.084783ms] Apr 9 21:53:43.449: INFO: Created: latency-svc-lpr8s Apr 9 21:53:43.507: INFO: Got endpoints: latency-svc-lpr8s [726.502149ms] Apr 9 21:53:43.538: INFO: Created: latency-svc-8n4mr Apr 9 21:53:43.553: INFO: Got endpoints: latency-svc-8n4mr [730.528436ms] Apr 9 21:53:43.605: INFO: Created: latency-svc-cmktc Apr 9 21:53:43.650: INFO: Got endpoints: latency-svc-cmktc [755.968264ms] Apr 9 21:53:43.665: INFO: Created: latency-svc-2jgqs Apr 9 21:53:43.679: INFO: Got endpoints: latency-svc-2jgqs [747.979246ms] Apr 9 21:53:43.695: INFO: Created: latency-svc-n2d64 Apr 9 21:53:43.709: INFO: Got endpoints: latency-svc-n2d64 [747.869214ms] Apr 9 21:53:43.730: INFO: Created: latency-svc-tvhzn Apr 9 21:53:43.745: INFO: Got endpoints: latency-svc-tvhzn [687.610712ms] Apr 9 21:53:43.782: INFO: Created: latency-svc-6j224 Apr 9 21:53:43.794: INFO: Got endpoints: latency-svc-6j224 [730.442607ms] Apr 9 21:53:43.815: INFO: Created: latency-svc-z8t6k Apr 9 21:53:43.818: INFO: Got endpoints: latency-svc-z8t6k [723.209334ms] Apr 9 21:53:43.837: INFO: Created: latency-svc-gb8r8 Apr 9 21:53:43.848: INFO: Got endpoints: latency-svc-gb8r8 [724.063455ms] Apr 9 21:53:43.873: INFO: Created: latency-svc-p7g4j Apr 9 21:53:43.913: INFO: Got endpoints: latency-svc-p7g4j [758.813151ms] Apr 9 21:53:43.926: INFO: Created: latency-svc-jdltz Apr 9 21:53:43.940: INFO: Got endpoints: latency-svc-jdltz [713.203424ms] Apr 9 21:53:43.968: INFO: Created: latency-svc-4vjjn Apr 9 21:53:43.983: INFO: Got endpoints: latency-svc-4vjjn [690.448995ms] Apr 9 21:53:44.005: INFO: Created: latency-svc-sz99z Apr 9 21:53:44.052: INFO: Got endpoints: latency-svc-sz99z [694.54443ms] Apr 9 21:53:44.064: INFO: Created: latency-svc-kq9ph Apr 9 21:53:44.079: INFO: Got endpoints: latency-svc-kq9ph [695.499371ms] Apr 9 21:53:44.101: INFO: Created: latency-svc-hj9pf Apr 9 21:53:44.109: INFO: Got endpoints: latency-svc-hj9pf 
[688.120501ms] Apr 9 21:53:44.130: INFO: Created: latency-svc-p7jt7 Apr 9 21:53:44.146: INFO: Got endpoints: latency-svc-p7jt7 [639.165434ms] Apr 9 21:53:44.195: INFO: Created: latency-svc-2hnvz Apr 9 21:53:44.198: INFO: Got endpoints: latency-svc-2hnvz [645.554879ms] Apr 9 21:53:44.227: INFO: Created: latency-svc-p2bhk Apr 9 21:53:44.242: INFO: Got endpoints: latency-svc-p2bhk [591.879939ms] Apr 9 21:53:44.263: INFO: Created: latency-svc-7n8wv Apr 9 21:53:44.281: INFO: Got endpoints: latency-svc-7n8wv [601.778436ms] Apr 9 21:53:44.339: INFO: Created: latency-svc-slssd Apr 9 21:53:44.342: INFO: Got endpoints: latency-svc-slssd [632.488547ms] Apr 9 21:53:44.364: INFO: Created: latency-svc-rldcq Apr 9 21:53:44.375: INFO: Got endpoints: latency-svc-rldcq [629.664253ms] Apr 9 21:53:44.407: INFO: Created: latency-svc-72bdk Apr 9 21:53:44.431: INFO: Got endpoints: latency-svc-72bdk [636.66056ms] Apr 9 21:53:44.495: INFO: Created: latency-svc-qsbcr Apr 9 21:53:44.497: INFO: Got endpoints: latency-svc-qsbcr [679.810654ms] Apr 9 21:53:44.521: INFO: Created: latency-svc-2x6xm Apr 9 21:53:44.532: INFO: Got endpoints: latency-svc-2x6xm [683.633449ms] Apr 9 21:53:44.550: INFO: Created: latency-svc-26b6k Apr 9 21:53:44.562: INFO: Got endpoints: latency-svc-26b6k [648.704296ms] Apr 9 21:53:44.586: INFO: Created: latency-svc-zsv2z Apr 9 21:53:44.632: INFO: Got endpoints: latency-svc-zsv2z [691.802368ms] Apr 9 21:53:44.652: INFO: Created: latency-svc-rz68x Apr 9 21:53:44.665: INFO: Got endpoints: latency-svc-rz68x [682.059271ms] Apr 9 21:53:44.682: INFO: Created: latency-svc-vdddm Apr 9 21:53:44.695: INFO: Got endpoints: latency-svc-vdddm [643.573082ms] Apr 9 21:53:44.714: INFO: Created: latency-svc-mrlbl Apr 9 21:53:44.794: INFO: Got endpoints: latency-svc-mrlbl [714.77189ms] Apr 9 21:53:44.797: INFO: Created: latency-svc-sqnwj Apr 9 21:53:44.804: INFO: Got endpoints: latency-svc-sqnwj [694.549047ms] Apr 9 21:53:44.833: INFO: Created: latency-svc-5hmrv Apr 9 21:53:44.858: INFO: 
Got endpoints: latency-svc-5hmrv [712.448836ms] Apr 9 21:53:44.893: INFO: Created: latency-svc-mxzc7 Apr 9 21:53:44.942: INFO: Got endpoints: latency-svc-mxzc7 [743.075452ms] Apr 9 21:53:44.952: INFO: Created: latency-svc-7l9rs Apr 9 21:53:44.966: INFO: Got endpoints: latency-svc-7l9rs [724.171754ms] Apr 9 21:53:44.994: INFO: Created: latency-svc-rp4q9 Apr 9 21:53:45.009: INFO: Got endpoints: latency-svc-rp4q9 [727.959208ms] Apr 9 21:53:45.025: INFO: Created: latency-svc-6cfzw Apr 9 21:53:45.094: INFO: Got endpoints: latency-svc-6cfzw [751.767698ms] Apr 9 21:53:45.095: INFO: Created: latency-svc-k92cx Apr 9 21:53:45.116: INFO: Got endpoints: latency-svc-k92cx [740.722105ms] Apr 9 21:53:45.156: INFO: Created: latency-svc-fvlq9 Apr 9 21:53:45.172: INFO: Got endpoints: latency-svc-fvlq9 [740.868533ms] Apr 9 21:53:45.192: INFO: Created: latency-svc-898d5 Apr 9 21:53:45.255: INFO: Got endpoints: latency-svc-898d5 [757.603748ms] Apr 9 21:53:45.265: INFO: Created: latency-svc-fx2v7 Apr 9 21:53:45.280: INFO: Got endpoints: latency-svc-fx2v7 [748.281004ms] Apr 9 21:53:45.330: INFO: Created: latency-svc-5zqzw Apr 9 21:53:45.346: INFO: Got endpoints: latency-svc-5zqzw [784.030201ms] Apr 9 21:53:45.393: INFO: Created: latency-svc-mpbs9 Apr 9 21:53:45.422: INFO: Got endpoints: latency-svc-mpbs9 [789.726586ms] Apr 9 21:53:45.422: INFO: Created: latency-svc-mxh94 Apr 9 21:53:45.437: INFO: Got endpoints: latency-svc-mxh94 [772.158696ms] Apr 9 21:53:45.463: INFO: Created: latency-svc-mrcp6 Apr 9 21:53:45.487: INFO: Got endpoints: latency-svc-mrcp6 [791.920615ms] Apr 9 21:53:45.555: INFO: Created: latency-svc-k9mjt Apr 9 21:53:45.563: INFO: Got endpoints: latency-svc-k9mjt [769.338601ms] Apr 9 21:53:45.595: INFO: Created: latency-svc-k8q8s Apr 9 21:53:45.606: INFO: Got endpoints: latency-svc-k8q8s [802.473579ms] Apr 9 21:53:45.625: INFO: Created: latency-svc-c6qkk Apr 9 21:53:45.698: INFO: Got endpoints: latency-svc-c6qkk [840.000587ms] Apr 9 21:53:45.709: INFO: Created: 
latency-svc-x6gjw Apr 9 21:53:45.720: INFO: Got endpoints: latency-svc-x6gjw [778.653446ms] Apr 9 21:53:45.744: INFO: Created: latency-svc-jpwst Apr 9 21:53:45.757: INFO: Got endpoints: latency-svc-jpwst [790.090776ms] Apr 9 21:53:45.775: INFO: Created: latency-svc-p4v2j Apr 9 21:53:45.787: INFO: Got endpoints: latency-svc-p4v2j [778.027384ms] Apr 9 21:53:45.836: INFO: Created: latency-svc-dqld9 Apr 9 21:53:45.840: INFO: Got endpoints: latency-svc-dqld9 [745.958459ms] Apr 9 21:53:45.867: INFO: Created: latency-svc-j6578 Apr 9 21:53:45.878: INFO: Got endpoints: latency-svc-j6578 [762.620552ms] Apr 9 21:53:45.913: INFO: Created: latency-svc-dzwb8 Apr 9 21:53:45.926: INFO: Got endpoints: latency-svc-dzwb8 [753.846849ms] Apr 9 21:53:45.992: INFO: Created: latency-svc-dzhp5 Apr 9 21:53:45.996: INFO: Got endpoints: latency-svc-dzhp5 [740.839333ms] Apr 9 21:53:46.014: INFO: Created: latency-svc-h2qlt Apr 9 21:53:46.029: INFO: Got endpoints: latency-svc-h2qlt [748.335515ms] Apr 9 21:53:46.050: INFO: Created: latency-svc-lpqhn Apr 9 21:53:46.065: INFO: Got endpoints: latency-svc-lpqhn [718.166812ms] Apr 9 21:53:46.081: INFO: Created: latency-svc-9xpcc Apr 9 21:53:46.135: INFO: Got endpoints: latency-svc-9xpcc [713.179396ms] Apr 9 21:53:46.137: INFO: Created: latency-svc-x2bm5 Apr 9 21:53:46.143: INFO: Got endpoints: latency-svc-x2bm5 [705.614934ms] Apr 9 21:53:46.164: INFO: Created: latency-svc-csctc Apr 9 21:53:46.194: INFO: Got endpoints: latency-svc-csctc [706.707873ms] Apr 9 21:53:46.230: INFO: Created: latency-svc-5st58 Apr 9 21:53:46.285: INFO: Got endpoints: latency-svc-5st58 [721.287033ms] Apr 9 21:53:46.287: INFO: Created: latency-svc-tmpmt Apr 9 21:53:46.292: INFO: Got endpoints: latency-svc-tmpmt [686.040657ms] Apr 9 21:53:46.315: INFO: Created: latency-svc-j6wwg Apr 9 21:53:46.329: INFO: Got endpoints: latency-svc-j6wwg [631.02422ms] Apr 9 21:53:46.345: INFO: Created: latency-svc-7srsh Apr 9 21:53:46.359: INFO: Got endpoints: latency-svc-7srsh [639.077437ms] Apr 
9 21:53:46.380: INFO: Created: latency-svc-knvvs Apr 9 21:53:46.435: INFO: Got endpoints: latency-svc-knvvs [678.032475ms] Apr 9 21:53:46.436: INFO: Created: latency-svc-gdsch Apr 9 21:53:46.450: INFO: Got endpoints: latency-svc-gdsch [662.903657ms] Apr 9 21:53:46.471: INFO: Created: latency-svc-7hzdb Apr 9 21:53:46.501: INFO: Got endpoints: latency-svc-7hzdb [661.119984ms] Apr 9 21:53:46.531: INFO: Created: latency-svc-vz9sk Apr 9 21:53:46.572: INFO: Got endpoints: latency-svc-vz9sk [693.869933ms] Apr 9 21:53:46.584: INFO: Created: latency-svc-brctg Apr 9 21:53:46.601: INFO: Got endpoints: latency-svc-brctg [674.90973ms] Apr 9 21:53:46.626: INFO: Created: latency-svc-lsh4c Apr 9 21:53:46.637: INFO: Got endpoints: latency-svc-lsh4c [640.841137ms] Apr 9 21:53:46.657: INFO: Created: latency-svc-jk65k Apr 9 21:53:46.704: INFO: Got endpoints: latency-svc-jk65k [675.117526ms] Apr 9 21:53:46.716: INFO: Created: latency-svc-znkrh Apr 9 21:53:46.743: INFO: Got endpoints: latency-svc-znkrh [678.375911ms] Apr 9 21:53:46.764: INFO: Created: latency-svc-lm76z Apr 9 21:53:46.776: INFO: Got endpoints: latency-svc-lm76z [640.970803ms] Apr 9 21:53:46.794: INFO: Created: latency-svc-lljp7 Apr 9 21:53:46.848: INFO: Got endpoints: latency-svc-lljp7 [705.135354ms] Apr 9 21:53:46.885: INFO: Created: latency-svc-mg8nv Apr 9 21:53:46.914: INFO: Got endpoints: latency-svc-mg8nv [720.315553ms] Apr 9 21:53:46.940: INFO: Created: latency-svc-ff7p2 Apr 9 21:53:46.997: INFO: Got endpoints: latency-svc-ff7p2 [712.459183ms] Apr 9 21:53:47.002: INFO: Created: latency-svc-9m5n6 Apr 9 21:53:47.034: INFO: Got endpoints: latency-svc-9m5n6 [741.999896ms] Apr 9 21:53:47.034: INFO: Created: latency-svc-zn6kb Apr 9 21:53:47.071: INFO: Got endpoints: latency-svc-zn6kb [741.167418ms] Apr 9 21:53:47.095: INFO: Created: latency-svc-vjvfl Apr 9 21:53:47.153: INFO: Got endpoints: latency-svc-vjvfl [794.002099ms] Apr 9 21:53:47.156: INFO: Created: latency-svc-lqzf6 Apr 9 21:53:47.167: INFO: Got endpoints: 
latency-svc-lqzf6 [732.839838ms] Apr 9 21:53:47.191: INFO: Created: latency-svc-qflhs Apr 9 21:53:47.214: INFO: Got endpoints: latency-svc-qflhs [764.053645ms] Apr 9 21:53:47.245: INFO: Created: latency-svc-6vfkq Apr 9 21:53:47.297: INFO: Got endpoints: latency-svc-6vfkq [796.537771ms] Apr 9 21:53:47.311: INFO: Created: latency-svc-lgskj Apr 9 21:53:47.324: INFO: Got endpoints: latency-svc-lgskj [751.712807ms] Apr 9 21:53:47.348: INFO: Created: latency-svc-dv9ws Apr 9 21:53:47.361: INFO: Got endpoints: latency-svc-dv9ws [760.416697ms] Apr 9 21:53:47.382: INFO: Created: latency-svc-254z9 Apr 9 21:53:47.447: INFO: Got endpoints: latency-svc-254z9 [810.040207ms] Apr 9 21:53:47.454: INFO: Created: latency-svc-5954s Apr 9 21:53:47.459: INFO: Got endpoints: latency-svc-5954s [754.598032ms] Apr 9 21:53:47.478: INFO: Created: latency-svc-h6xv6 Apr 9 21:53:47.487: INFO: Got endpoints: latency-svc-h6xv6 [744.087281ms] Apr 9 21:53:47.521: INFO: Created: latency-svc-gv5hx Apr 9 21:53:47.602: INFO: Got endpoints: latency-svc-gv5hx [825.857734ms] Apr 9 21:53:47.610: INFO: Created: latency-svc-k7p8t Apr 9 21:53:47.627: INFO: Got endpoints: latency-svc-k7p8t [778.864729ms] Apr 9 21:53:47.652: INFO: Created: latency-svc-mtk4q Apr 9 21:53:47.662: INFO: Got endpoints: latency-svc-mtk4q [747.947656ms] Apr 9 21:53:47.682: INFO: Created: latency-svc-7dpbf Apr 9 21:53:47.740: INFO: Got endpoints: latency-svc-7dpbf [742.810711ms] Apr 9 21:53:47.740: INFO: Latencies: [40.878983ms 80.263773ms 161.249475ms 178.460994ms 241.650363ms 310.644093ms 353.27074ms 429.914094ms 455.360753ms 503.946607ms 560.737447ms 591.879939ms 601.778436ms 604.492552ms 629.664253ms 631.02422ms 632.488547ms 636.66056ms 639.077437ms 639.165434ms 640.447902ms 640.841137ms 640.970803ms 643.573082ms 645.554879ms 648.704296ms 651.412886ms 661.119984ms 662.903657ms 674.90973ms 675.084783ms 675.117526ms 675.555026ms 678.032475ms 678.375911ms 679.810654ms 682.059271ms 682.987079ms 683.517438ms 683.633449ms 686.040657ms 
687.610712ms 687.892769ms 688.120501ms 690.448995ms 691.802368ms 693.869933ms 694.54443ms 694.549047ms 695.499371ms 696.451907ms 700.036754ms 700.041504ms 700.116167ms 705.135354ms 705.614934ms 705.943829ms 706.676795ms 706.707873ms 708.523554ms 709.862558ms 710.633692ms 710.670412ms 712.448836ms 712.459183ms 713.179396ms 713.203424ms 713.493369ms 713.682445ms 714.70429ms 714.77189ms 716.470565ms 717.592064ms 718.166812ms 720.315553ms 721.176101ms 721.287033ms 721.507168ms 721.84563ms 723.209334ms 724.063455ms 724.171754ms 724.416055ms 724.458863ms 726.502149ms 727.959208ms 728.218226ms 729.107717ms 730.307378ms 730.442607ms 730.528436ms 730.777844ms 732.839838ms 735.619504ms 737.625936ms 739.356951ms 739.760772ms 740.607174ms 740.722105ms 740.839333ms 740.868533ms 741.167418ms 741.669199ms 741.999896ms 742.810711ms 743.075452ms 743.378654ms 743.765451ms 744.087281ms 744.71202ms 745.508982ms 745.638474ms 745.958459ms 747.869214ms 747.947656ms 747.979246ms 748.281004ms 748.335515ms 749.818708ms 750.328437ms 751.712807ms 751.767698ms 753.846849ms 754.060675ms 754.598032ms 755.968264ms 756.418664ms 757.358673ms 757.603748ms 758.813151ms 759.650984ms 760.416697ms 761.955183ms 762.620552ms 764.053645ms 769.338601ms 769.561605ms 771.456052ms 771.826569ms 772.158696ms 774.049075ms 777.314796ms 778.027384ms 778.284957ms 778.653446ms 778.661955ms 778.864729ms 784.030201ms 788.786754ms 789.726586ms 790.090776ms 791.609847ms 791.920615ms 792.088945ms 794.002099ms 795.71007ms 796.537771ms 801.172584ms 802.473579ms 802.55228ms 806.289676ms 806.820245ms 810.040207ms 817.767343ms 819.935137ms 821.30165ms 822.158857ms 823.899854ms 825.857734ms 826.374885ms 831.883978ms 832.171099ms 832.845044ms 835.218958ms 835.279023ms 837.0488ms 840.000587ms 840.732161ms 842.449783ms 844.708649ms 845.029953ms 846.096372ms 850.513279ms 851.643803ms 853.277552ms 855.536153ms 855.869749ms 855.926319ms 855.935234ms 857.066812ms 857.9681ms 863.844887ms 865.09349ms 866.62482ms 875.316665ms 
881.506982ms 881.985062ms 892.95524ms 895.930795ms 899.151455ms] Apr 9 21:53:47.740: INFO: 50 %ile: 740.868533ms Apr 9 21:53:47.740: INFO: 90 %ile: 845.029953ms Apr 9 21:53:47.740: INFO: 99 %ile: 895.930795ms Apr 9 21:53:47.740: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:53:47.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5237" for this suite. • [SLOW TEST:13.308 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":168,"skipped":2724,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:53:47.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Apr 9 21:53:47.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
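The latency summary above (50/90/99 %ile over 200 sorted samples) can be approximated with a simple index-based percentile. This is a minimal sketch under the assumption of nearest-rank style indexing; the e2e framework's exact rounding may differ slightly:

```python
def percentile(sorted_durations, pct):
    """Index-based percentile over an already-sorted sample (in ms).

    Assumption: nearest-rank style indexing, clamped to the last element;
    the e2e framework's exact rounding may differ.
    """
    if not sorted_durations:
        raise ValueError("empty sample")
    idx = min(int(len(sorted_durations) * pct / 100), len(sorted_durations) - 1)
    return sorted_durations[idx]


# Illustrative subset of the latencies reported above (not the full 200).
samples = sorted([40.878983, 80.263773, 740.868533, 845.029953, 895.930795, 899.151455])
p50 = percentile(samples, 50)
p99 = percentile(samples, 99)
```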
--namespace=kubectl-1167' Apr 9 21:53:48.131: INFO: stderr: "" Apr 9 21:53:48.131: INFO: stdout: "pod/pause created\n" Apr 9 21:53:48.131: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 9 21:53:48.132: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1167" to be "running and ready" Apr 9 21:53:48.162: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 30.660163ms Apr 9 21:53:50.166: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03428013s Apr 9 21:53:52.170: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.038037208s Apr 9 21:53:52.170: INFO: Pod "pause" satisfied condition "running and ready" Apr 9 21:53:52.170: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 9 21:53:52.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1167' Apr 9 21:53:52.277: INFO: stderr: "" Apr 9 21:53:52.277: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 9 21:53:52.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1167' Apr 9 21:53:52.385: INFO: stderr: "" Apr 9 21:53:52.385: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 9 21:53:52.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1167' Apr 9 21:53:52.491: INFO: stderr: "" Apr 9 21:53:52.491: INFO: stdout: "pod/pause labeled\n" STEP: 
verifying the pod doesn't have the label testing-label Apr 9 21:53:52.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1167' Apr 9 21:53:52.602: INFO: stderr: "" Apr 9 21:53:52.602: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Apr 9 21:53:52.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1167' Apr 9 21:53:52.746: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 21:53:52.746: INFO: stdout: "pod \"pause\" force deleted\n" Apr 9 21:53:52.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1167' Apr 9 21:53:52.867: INFO: stderr: "No resources found in kubectl-1167 namespace.\n" Apr 9 21:53:52.867: INFO: stdout: "" Apr 9 21:53:52.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1167 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 21:53:53.032: INFO: stderr: "" Apr 9 21:53:53.032: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:53:53.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1167" for this suite. 
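The final go-template in the cleanup step above prints only pod names whose metadata lacks a `deletionTimestamp`. The same filter, sketched in Python over pod objects as plain dicts (the field names match the Kubernetes API; the helper itself is hypothetical):

```python
def pods_not_terminating(pods):
    """Return names of pods with no metadata.deletionTimestamp set,
    mirroring the go-template filter used in the cleanup step."""
    return [
        p["metadata"]["name"]
        for p in pods
        if not p["metadata"].get("deletionTimestamp")
    ]


# Example: one live pod, one already marked for deletion.
pods = [
    {"metadata": {"name": "pause"}},
    {"metadata": {"name": "old", "deletionTimestamp": "2020-04-09T21:53:52Z"}},
]
```

An empty stdout from the real `kubectl get pods -o go-template=...` call, as seen in the log, corresponds to this helper returning an empty list.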
• [SLOW TEST:5.291 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":169,"skipped":2730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:53:53.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 9 21:53:53.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7584' Apr 9 21:53:53.441: INFO: stderr: "kubectl run --generator=run/v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 9 21:53:53.441: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 9 21:53:53.493: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 9 21:53:53.585: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 9 21:53:53.636: INFO: scanned /root for discovery docs: Apr 9 21:53:53.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7584' Apr 9 21:54:09.688: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 9 21:54:09.688: INFO: stdout: "Created e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f\nScaling up e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Apr 9 21:54:09.689: INFO: stdout: "Created e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f\nScaling up e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 9 21:54:09.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7584' Apr 9 21:54:09.831: INFO: stderr: "" Apr 9 21:54:09.831: INFO: stdout: "e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f-lb82w " Apr 9 21:54:09.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f-lb82w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7584' Apr 9 21:54:09.962: INFO: stderr: "" Apr 9 21:54:09.963: INFO: stdout: "true" Apr 9 21:54:09.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f-lb82w -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7584' Apr 9 21:54:10.123: INFO: stderr: "" Apr 9 21:54:10.123: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 9 21:54:10.123: INFO: e2e-test-httpd-rc-5885d96cb93cf8148f01a3fa5aaeee4f-lb82w is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 9 21:54:10.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7584' Apr 9 21:54:10.277: INFO: stderr: "" Apr 9 21:54:10.277: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:54:10.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7584" for this suite. 
• [SLOW TEST:17.304 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":170,"skipped":2764,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:54:10.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 9 21:54:15.291: INFO: Successfully updated pod "annotationupdate0e1b329e-216b-41cb-9624-d75d4cd36fb4" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:54:17.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1678" for this 
suite. • [SLOW TEST:7.001 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2775,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:54:17.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f49c1311-3d68-4bfd-bac8-c9732feeec50 STEP: Creating a pod to test consume configMaps Apr 9 21:54:17.400: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c" in namespace "projected-4283" to be "success or failure" Apr 9 21:54:17.405: INFO: Pod "pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.134732ms Apr 9 21:54:19.417: INFO: Pod "pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016707489s Apr 9 21:54:21.421: INFO: Pod "pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020340925s STEP: Saw pod success Apr 9 21:54:21.421: INFO: Pod "pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c" satisfied condition "success or failure" Apr 9 21:54:21.423: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c container projected-configmap-volume-test: STEP: delete the pod Apr 9 21:54:21.470: INFO: Waiting for pod pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c to disappear Apr 9 21:54:21.478: INFO: Pod pod-projected-configmaps-abd10887-736b-40bc-a2aa-b1ae6975646c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:54:21.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4283" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2780,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:54:21.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:54:21.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8" in namespace "projected-242" to be "success or failure" Apr 9 21:54:21.587: INFO: Pod "downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.859345ms Apr 9 21:54:23.635: INFO: Pod "downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058229037s Apr 9 21:54:25.640: INFO: Pod "downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06283444s STEP: Saw pod success Apr 9 21:54:25.640: INFO: Pod "downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8" satisfied condition "success or failure" Apr 9 21:54:25.643: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8 container client-container: STEP: delete the pod Apr 9 21:54:25.712: INFO: Waiting for pod downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8 to disappear Apr 9 21:54:25.718: INFO: Pod downwardapi-volume-e07c553f-d818-473e-ac07-ebb2adb23ea8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:54:25.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-242" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:54:25.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 9 21:54:26.369: INFO: Pod name 
wrapped-volume-race-f933cb5f-0bb0-47e4-ad55-697ba1bbb8e3: Found 0 pods out of 5 Apr 9 21:54:31.390: INFO: Pod name wrapped-volume-race-f933cb5f-0bb0-47e4-ad55-697ba1bbb8e3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f933cb5f-0bb0-47e4-ad55-697ba1bbb8e3 in namespace emptydir-wrapper-8588, will wait for the garbage collector to delete the pods Apr 9 21:54:45.472: INFO: Deleting ReplicationController wrapped-volume-race-f933cb5f-0bb0-47e4-ad55-697ba1bbb8e3 took: 6.692849ms Apr 9 21:54:45.872: INFO: Terminating ReplicationController wrapped-volume-race-f933cb5f-0bb0-47e4-ad55-697ba1bbb8e3 pods took: 400.282261ms STEP: Creating RC which spawns configmap-volume pods Apr 9 21:55:00.627: INFO: Pod name wrapped-volume-race-4924a641-a444-4925-bf55-0ba46b417751: Found 0 pods out of 5 Apr 9 21:55:05.635: INFO: Pod name wrapped-volume-race-4924a641-a444-4925-bf55-0ba46b417751: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4924a641-a444-4925-bf55-0ba46b417751 in namespace emptydir-wrapper-8588, will wait for the garbage collector to delete the pods Apr 9 21:55:19.720: INFO: Deleting ReplicationController wrapped-volume-race-4924a641-a444-4925-bf55-0ba46b417751 took: 6.434366ms Apr 9 21:55:20.020: INFO: Terminating ReplicationController wrapped-volume-race-4924a641-a444-4925-bf55-0ba46b417751 pods took: 300.276459ms STEP: Creating RC which spawns configmap-volume pods Apr 9 21:55:30.360: INFO: Pod name wrapped-volume-race-02b7db5d-27ce-413d-95b9-df24f3808e5f: Found 0 pods out of 5 Apr 9 21:55:35.366: INFO: Pod name wrapped-volume-race-02b7db5d-27ce-413d-95b9-df24f3808e5f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-02b7db5d-27ce-413d-95b9-df24f3808e5f in namespace emptydir-wrapper-8588, will wait for the garbage collector to delete the pods Apr 9 21:55:49.449: INFO: Deleting 
ReplicationController wrapped-volume-race-02b7db5d-27ce-413d-95b9-df24f3808e5f took: 7.120176ms Apr 9 21:55:49.749: INFO: Terminating ReplicationController wrapped-volume-race-02b7db5d-27ce-413d-95b9-df24f3808e5f pods took: 300.441117ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:00.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8588" for this suite. • [SLOW TEST:94.832 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":174,"skipped":2807,"failed":0} [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:00.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-93287f09-9df9-4e0c-921b-1cb9868ad0d8 STEP: 
Creating a pod to test consume configMaps Apr 9 21:56:00.646: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05" in namespace "configmap-6951" to be "success or failure" Apr 9 21:56:00.649: INFO: Pod "pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.498491ms Apr 9 21:56:02.652: INFO: Pod "pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006330833s Apr 9 21:56:04.656: INFO: Pod "pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010210562s STEP: Saw pod success Apr 9 21:56:04.656: INFO: Pod "pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05" satisfied condition "success or failure" Apr 9 21:56:04.658: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05 container configmap-volume-test: STEP: delete the pod Apr 9 21:56:04.702: INFO: Waiting for pod pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05 to disappear Apr 9 21:56:04.715: INFO: Pod pod-configmaps-ae873645-f72c-464e-a94f-5e4f8d99aa05 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:04.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6951" for this suite. 
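The `{"msg":"PASSED …","total":278,"completed":…,"skipped":…,"failed":0}` lines emitted after each spec throughout this log are machine-readable JSON summaries. As a minimal sketch (not part of the test framework itself), a small script can tally pass/fail counts from a saved log:

```python
import json
import re

# Each completed spec emits a one-line JSON summary, e.g.:
# {"msg":"PASSED [sig-storage] ...","total":278,"completed":175,"skipped":2807,"failed":0}
SUMMARY_RE = re.compile(r'\{"msg":"(?:PASSED|FAILED)[^\n]*?\}')

def tally(log_text):
    """Return (passed, failed) spec counts from ginkgo JSON summary lines."""
    passed = failed = 0
    for match in SUMMARY_RE.finditer(log_text):
        entry = json.loads(match.group(0))
        if entry["msg"].startswith("PASSED"):
            passed += 1
        else:
            failed += 1
    return passed, failed
```

The `completed` field in the last summary line gives the same running total without scanning the whole log.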
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2807,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:04.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:09.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1178" for this suite. 
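The ReplicationController adoption test above hinges on label-selector matching: an orphan pod whose labels satisfy the controller's equality-based selector gets adopted. A minimal sketch of that matching rule (an illustration, not the actual controller code):

```python
def selector_matches(selector, pod_labels):
    """Equality-based selector: every selector key/value must appear in the pod's labels."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

# An RC selecting on the 'name' label, as in the adoption test above
rc_selector = {"name": "pod-adoption"}
orphan_pod = {"name": "pod-adoption"}        # matches -> adopted
unrelated_pod = {"name": "something-else"}   # does not match -> left alone
```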
• [SLOW TEST:5.097 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":176,"skipped":2811,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:09.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 9 21:56:09.910: INFO: Waiting up to 5m0s for pod "pod-da37c36b-b96f-46ff-8389-d4e802e42388" in namespace "emptydir-8605" to be "success or failure" Apr 9 21:56:09.944: INFO: Pod "pod-da37c36b-b96f-46ff-8389-d4e802e42388": Phase="Pending", Reason="", readiness=false. Elapsed: 34.011346ms Apr 9 21:56:11.947: INFO: Pod "pod-da37c36b-b96f-46ff-8389-d4e802e42388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037869453s Apr 9 21:56:13.952: INFO: Pod "pod-da37c36b-b96f-46ff-8389-d4e802e42388": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042054659s STEP: Saw pod success Apr 9 21:56:13.952: INFO: Pod "pod-da37c36b-b96f-46ff-8389-d4e802e42388" satisfied condition "success or failure" Apr 9 21:56:13.955: INFO: Trying to get logs from node jerma-worker2 pod pod-da37c36b-b96f-46ff-8389-d4e802e42388 container test-container: STEP: delete the pod Apr 9 21:56:13.987: INFO: Waiting for pod pod-da37c36b-b96f-46ff-8389-d4e802e42388 to disappear Apr 9 21:56:14.005: INFO: Pod pod-da37c36b-b96f-46ff-8389-d4e802e42388 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:14.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8605" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:14.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 9 21:56:14.086: INFO: Waiting up to 5m0s for pod "pod-370c4286-5458-46d2-bf88-561a252ba5ed" in namespace "emptydir-6080" to be "success or 
failure" Apr 9 21:56:14.093: INFO: Pod "pod-370c4286-5458-46d2-bf88-561a252ba5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.423635ms Apr 9 21:56:16.097: INFO: Pod "pod-370c4286-5458-46d2-bf88-561a252ba5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011553929s Apr 9 21:56:18.102: INFO: Pod "pod-370c4286-5458-46d2-bf88-561a252ba5ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015797685s STEP: Saw pod success Apr 9 21:56:18.102: INFO: Pod "pod-370c4286-5458-46d2-bf88-561a252ba5ed" satisfied condition "success or failure" Apr 9 21:56:18.105: INFO: Trying to get logs from node jerma-worker pod pod-370c4286-5458-46d2-bf88-561a252ba5ed container test-container: STEP: delete the pod Apr 9 21:56:18.147: INFO: Waiting for pod pod-370c4286-5458-46d2-bf88-561a252ba5ed to disappear Apr 9 21:56:18.173: INFO: Pod pod-370c4286-5458-46d2-bf88-561a252ba5ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:18.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6080" for this suite. 
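The emptyDir specs are named by a (user, mode, medium) triple, e.g. `(non-root,0777,tmpfs)`. A quick sketch rendering such an octal mode as the familiar `rwx` permission string the test verifies inside the container:

```python
def mode_string(mode):
    """Render a 9-bit octal file mode (e.g. 0o666) as an rwx permission string."""
    bits = "rwx"
    out = []
    for shift in (6, 3, 0):  # user, group, other triplets
        triplet = (mode >> shift) & 0b111
        out.append("".join(bits[i] if triplet & (4 >> i) else "-" for i in range(3)))
    return "".join(out)
```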
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:18.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 9 21:56:18.242: INFO: Waiting up to 5m0s for pod "pod-76857d46-93b1-4428-8085-8332811adbbd" in namespace "emptydir-871" to be "success or failure" Apr 9 21:56:18.246: INFO: Pod "pod-76857d46-93b1-4428-8085-8332811adbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.400431ms Apr 9 21:56:20.252: INFO: Pod "pod-76857d46-93b1-4428-8085-8332811adbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009367673s Apr 9 21:56:22.256: INFO: Pod "pod-76857d46-93b1-4428-8085-8332811adbbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014121386s STEP: Saw pod success Apr 9 21:56:22.256: INFO: Pod "pod-76857d46-93b1-4428-8085-8332811adbbd" satisfied condition "success or failure" Apr 9 21:56:22.260: INFO: Trying to get logs from node jerma-worker2 pod pod-76857d46-93b1-4428-8085-8332811adbbd container test-container: STEP: delete the pod Apr 9 21:56:22.277: INFO: Waiting for pod pod-76857d46-93b1-4428-8085-8332811adbbd to disappear Apr 9 21:56:22.300: INFO: Pod pod-76857d46-93b1-4428-8085-8332811adbbd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:22.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-871" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2882,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:22.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 21:56:22.351: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition 
resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:28.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2618" for this suite. • [SLOW TEST:6.548 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":180,"skipped":2885,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:28.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same 
group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 9 21:56:28.941: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 9 21:56:39.859: INFO: >>> kubeConfig: /root/.kube/config Apr 9 21:56:42.291: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:56:52.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7839" for this suite. • [SLOW TEST:23.972 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":181,"skipped":2890,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:56:52.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9070 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 9 21:56:52.869: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 9 21:57:10.988: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.166:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9070 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:57:10.988: INFO: >>> kubeConfig: /root/.kube/config I0409 21:57:11.024130 6 log.go:172] (0xc00156e160) (0xc000f8ebe0) Create stream I0409 21:57:11.024158 6 log.go:172] (0xc00156e160) (0xc000f8ebe0) Stream added, broadcasting: 1 I0409 21:57:11.026273 6 log.go:172] (0xc00156e160) Reply frame received for 1 I0409 21:57:11.026324 6 log.go:172] (0xc00156e160) (0xc000c980a0) Create stream I0409 21:57:11.026347 6 log.go:172] (0xc00156e160) (0xc000c980a0) Stream added, broadcasting: 3 I0409 21:57:11.027370 6 log.go:172] (0xc00156e160) Reply frame received for 3 I0409 21:57:11.027417 6 log.go:172] (0xc00156e160) (0xc000f8ee60) Create stream I0409 21:57:11.027433 6 log.go:172] (0xc00156e160) (0xc000f8ee60) Stream added, broadcasting: 5 I0409 21:57:11.028551 6 log.go:172] (0xc00156e160) Reply frame received for 5 I0409 21:57:11.119228 6 log.go:172] (0xc00156e160) Data frame received for 3 I0409 21:57:11.119256 6 log.go:172] (0xc000c980a0) (3) Data frame handling I0409 21:57:11.119273 6 log.go:172] (0xc000c980a0) (3) Data frame sent I0409 21:57:11.119283 6 log.go:172] (0xc00156e160) Data frame received for 3 I0409 21:57:11.119294 6 log.go:172] (0xc000c980a0) (3) Data frame handling I0409 21:57:11.119412 6 log.go:172] (0xc00156e160) Data frame received for 5 
I0409 21:57:11.119446 6 log.go:172] (0xc000f8ee60) (5) Data frame handling I0409 21:57:11.121398 6 log.go:172] (0xc00156e160) Data frame received for 1 I0409 21:57:11.121430 6 log.go:172] (0xc000f8ebe0) (1) Data frame handling I0409 21:57:11.121455 6 log.go:172] (0xc000f8ebe0) (1) Data frame sent I0409 21:57:11.121478 6 log.go:172] (0xc00156e160) (0xc000f8ebe0) Stream removed, broadcasting: 1 I0409 21:57:11.121499 6 log.go:172] (0xc00156e160) Go away received I0409 21:57:11.121581 6 log.go:172] (0xc00156e160) (0xc000f8ebe0) Stream removed, broadcasting: 1 I0409 21:57:11.121598 6 log.go:172] (0xc00156e160) (0xc000c980a0) Stream removed, broadcasting: 3 I0409 21:57:11.121614 6 log.go:172] (0xc00156e160) (0xc000f8ee60) Stream removed, broadcasting: 5 Apr 9 21:57:11.121: INFO: Found all expected endpoints: [netserver-0] Apr 9 21:57:11.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9070 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 21:57:11.124: INFO: >>> kubeConfig: /root/.kube/config I0409 21:57:11.157028 6 log.go:172] (0xc001868000) (0xc00221da40) Create stream I0409 21:57:11.157076 6 log.go:172] (0xc001868000) (0xc00221da40) Stream added, broadcasting: 1 I0409 21:57:11.159277 6 log.go:172] (0xc001868000) Reply frame received for 1 I0409 21:57:11.159350 6 log.go:172] (0xc001868000) (0xc000c98140) Create stream I0409 21:57:11.159382 6 log.go:172] (0xc001868000) (0xc000c98140) Stream added, broadcasting: 3 I0409 21:57:11.160549 6 log.go:172] (0xc001868000) Reply frame received for 3 I0409 21:57:11.160586 6 log.go:172] (0xc001868000) (0xc000f8f860) Create stream I0409 21:57:11.160596 6 log.go:172] (0xc001868000) (0xc000f8f860) Stream added, broadcasting: 5 I0409 21:57:11.161616 6 log.go:172] (0xc001868000) Reply frame received for 5 I0409 21:57:11.224481 6 
log.go:172] (0xc001868000) Data frame received for 3 I0409 21:57:11.224517 6 log.go:172] (0xc000c98140) (3) Data frame handling I0409 21:57:11.224532 6 log.go:172] (0xc000c98140) (3) Data frame sent I0409 21:57:11.224539 6 log.go:172] (0xc001868000) Data frame received for 3 I0409 21:57:11.224543 6 log.go:172] (0xc000c98140) (3) Data frame handling I0409 21:57:11.224608 6 log.go:172] (0xc001868000) Data frame received for 5 I0409 21:57:11.224651 6 log.go:172] (0xc000f8f860) (5) Data frame handling I0409 21:57:11.226217 6 log.go:172] (0xc001868000) Data frame received for 1 I0409 21:57:11.226240 6 log.go:172] (0xc00221da40) (1) Data frame handling I0409 21:57:11.226265 6 log.go:172] (0xc00221da40) (1) Data frame sent I0409 21:57:11.226287 6 log.go:172] (0xc001868000) (0xc00221da40) Stream removed, broadcasting: 1 I0409 21:57:11.226349 6 log.go:172] (0xc001868000) Go away received I0409 21:57:11.226388 6 log.go:172] (0xc001868000) (0xc00221da40) Stream removed, broadcasting: 1 I0409 21:57:11.226408 6 log.go:172] (0xc001868000) (0xc000c98140) Stream removed, broadcasting: 3 I0409 21:57:11.226424 6 log.go:172] (0xc001868000) (0xc000f8f860) Stream removed, broadcasting: 5 Apr 9 21:57:11.226: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:11.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9070" for this suite. 
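The networking check above curls `http://<pod-ip>:8080/hostName` from a host-network pod and requires a non-empty hostname in reply. A self-contained sketch of that round trip using a local stand-in server (the handler and port here are assumptions for illustration, not the real agnhost image):

```python
import http.server
import socket
import threading
import urllib.request

class HostNameHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for agnhost's /hostName endpoint: reply with this host's name."""
    def do_GET(self):
        body = socket.gethostname().encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def probe(url, timeout=5):
    """Mirror the test's check: fetch /hostName and return the stripped body."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode().strip()

server = http.server.HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
name = probe(f"http://127.0.0.1:{server.server_address[1]}/hostName")
server.shutdown()
```

The e2e test additionally pipes the body through `grep -v '^\s*$'`, which is the same non-empty check as asserting `name` is not blank.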
• [SLOW TEST:18.404 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2890,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:11.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:11.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4454" for this suite. 
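The `Elapsed:` fields throughout this log are Go duration strings (`4.06283444s`, `400.282261ms`, `9.859345ms`). A small sketch converting them to seconds for post-processing (a subset of Go's units, sufficient for this log):

```python
import re

# Go duration units in seconds (subset covering this log's "Elapsed" fields)
UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text):
    """Convert a Go duration string such as '2.058229037s' or '1m30s' to float seconds."""
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", text):
        total += float(value) * UNITS[unit]
    return total
```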
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":183,"skipped":2906,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:11.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3636.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3636.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3636.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3636.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3636.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3636.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 21:57:17.563: INFO: DNS probes using dns-3636/dns-test-adc32f3b-39ae-4718-940a-1c167e3fe92c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:17.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3636" for this suite. 
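The probe commands above build the pod's DNS A record by dashing its IP (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'`). The same transformation as a one-function Python sketch:

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Dashed-IP pod DNS name: 10.244.1.166 -> 10-244-1-166.<ns>.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"
```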
• [SLOW TEST:6.650 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":184,"skipped":2938,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:17.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-11ea4b3e-44a4-483e-9d45-0f50382c57fc STEP: Creating a pod to test consume secrets Apr 9 21:57:18.079: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0" in namespace "projected-3403" to be "success or failure" Apr 9 21:57:18.089: INFO: Pod "pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.488615ms Apr 9 21:57:20.094: INFO: Pod "pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014922994s Apr 9 21:57:22.109: INFO: Pod "pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029925859s STEP: Saw pod success Apr 9 21:57:22.109: INFO: Pod "pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0" satisfied condition "success or failure" Apr 9 21:57:22.114: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0 container projected-secret-volume-test: STEP: delete the pod Apr 9 21:57:22.147: INFO: Waiting for pod pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0 to disappear Apr 9 21:57:22.161: INFO: Pod pod-projected-secrets-c4ffcca8-a53f-492e-b53d-6073f96082b0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:22.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3403" for this suite. 
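The projected-secret spec above exercises a volume `defaultMode`. In JSON manifests that field is written in decimal (JSON has no octal literal), so a mode like 0644 appears as 420; a trivial conversion sketch:

```python
def default_mode_decimal(octal_string):
    """Translate an octal mode string like '644' to the decimal value used in JSON manifests."""
    return int(octal_string, 8)
```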
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:22.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 9 21:57:22.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2121' Apr 9 21:57:22.394: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 9 21:57:22.394: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Apr 9 21:57:24.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2121' Apr 9 21:57:24.545: INFO: stderr: "" Apr 9 21:57:24.545: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:24.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2121" for this suite. 
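The deprecation warning captured in the output above is worth heeding: `kubectl run --generator=deployment/apps.v1` was removed in later kubectl releases. The replacement is `kubectl create deployment`, or declaratively, a Deployment manifest roughly equivalent to what the deprecated generator produced (the `run:` label matches the generator's labeling convention):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```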
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":186,"skipped":2978,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:24.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0409 21:57:34.617410 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 9 21:57:34.617: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:34.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6246" for this suite. 
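What the garbage-collector test above does: it creates a ReplicationController, deletes it without orphaning, and waits for the GC to remove the dependent pods via their `ownerReferences`. A sketch of such an RC (names illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc          # illustrative name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: nginx
```

Pods created by the RC carry an `ownerReference` pointing back at it; deleting the RC with a cascading propagation policy (Background or Foreground, as opposed to Orphan) lets the garbage collector clean up the pods, which is the behavior verified here.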
• [SLOW TEST:10.072 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":187,"skipped":2982,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:34.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-5b5beb03-a83d-4b6f-be0e-ee28cfa00c22 STEP: Creating a pod to test consume secrets Apr 9 21:57:34.708: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120" in namespace "projected-2974" to be "success or failure" Apr 9 21:57:34.735: INFO: Pod "pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.420273ms Apr 9 21:57:36.767: INFO: Pod "pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059080836s Apr 9 21:57:38.771: INFO: Pod "pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063196868s STEP: Saw pod success Apr 9 21:57:38.771: INFO: Pod "pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120" satisfied condition "success or failure" Apr 9 21:57:38.775: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120 container projected-secret-volume-test: STEP: delete the pod Apr 9 21:57:38.806: INFO: Waiting for pod pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120 to disappear Apr 9 21:57:38.833: INFO: Pod pod-projected-secrets-1d594b9d-dbd4-4e72-9838-4cc97d9f6120 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:38.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2974" for this suite. 
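This variant of the projected-secret test adds pod-level security context to the mix: the files must be readable by a non-root user with the configured `fsGroup`. The relevant portion of such a spec might look like this (values illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-nonroot-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root
    fsGroup: 1001              # group ownership applied to volume files
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-demo        # illustrative name
```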
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:38.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8461 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8461 to expose endpoints map[] Apr 9 21:57:38.990: INFO: Get endpoints failed (18.819445ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 9 21:57:40.018: INFO: successfully validated that service multi-endpoint-test in namespace services-8461 exposes endpoints map[] (1.047346183s elapsed) STEP: Creating pod pod1 in namespace services-8461 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8461 to expose endpoints map[pod1:[100]] Apr 9 21:57:43.059: INFO: successfully validated that service multi-endpoint-test in namespace services-8461 exposes endpoints map[pod1:[100]] (3.033858253s elapsed) STEP: Creating pod pod2 in namespace services-8461 STEP: waiting 
up to 3m0s for service multi-endpoint-test in namespace services-8461 to expose endpoints map[pod1:[100] pod2:[101]] Apr 9 21:57:47.136: INFO: successfully validated that service multi-endpoint-test in namespace services-8461 exposes endpoints map[pod1:[100] pod2:[101]] (4.072581102s elapsed) STEP: Deleting pod pod1 in namespace services-8461 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8461 to expose endpoints map[pod2:[101]] Apr 9 21:57:48.165: INFO: successfully validated that service multi-endpoint-test in namespace services-8461 exposes endpoints map[pod2:[101]] (1.024741605s elapsed) STEP: Deleting pod pod2 in namespace services-8461 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8461 to expose endpoints map[] Apr 9 21:57:49.180: INFO: successfully validated that service multi-endpoint-test in namespace services-8461 exposes endpoints map[] (1.010941232s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:49.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8461" for this suite. 
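The multiport-endpoints test above tracks the endpoints map as pods come and go behind a two-port Service. A sketch of a Service of that shape — multi-port Services require each port to be named; the selector and port numbers here are illustrative, though the target ports 100 and 101 mirror the endpoints maps in the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-demo   # illustrative label
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
```

With no matching pods the endpoints map is empty (`map[]`); as pod1 and pod2 are created and deleted, the controller updates the endpoints to `map[pod1:[100] pod2:[101]]` and back, which is what the test validates.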
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.401 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":189,"skipped":3035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:49.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 9 21:57:54.377: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:57:54.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "replicaset-9434" for this suite. • [SLOW TEST:5.220 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":190,"skipped":3086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:57:54.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-d053ecab-94f6-494f-ade7-53ac7b025498 STEP: Creating secret with name s-test-opt-upd-5c787792-43d3-46c7-98dc-b8a7f67dd2ec STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d053ecab-94f6-494f-ade7-53ac7b025498 STEP: Updating secret s-test-opt-upd-5c787792-43d3-46c7-98dc-b8a7f67dd2ec STEP: Creating secret with name s-test-opt-create-78ce8035-0d82-43a1-b7ee-7a71a1d01e7c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:59:23.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5971" for this suite. • [SLOW TEST:89.239 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3109,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:59:23.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-cx2r STEP: Creating a pod to test atomic-volume-subpath Apr 9 21:59:23.801: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cx2r" in namespace "subpath-6207" to be "success or failure" Apr 9 21:59:23.808: INFO: Pod 
"pod-subpath-test-secret-cx2r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.778514ms Apr 9 21:59:25.812: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010584213s Apr 9 21:59:27.816: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 4.014680027s Apr 9 21:59:29.819: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 6.018414299s Apr 9 21:59:31.836: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 8.035338556s Apr 9 21:59:33.840: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 10.039209938s Apr 9 21:59:35.844: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 12.042965619s Apr 9 21:59:37.848: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 14.04739443s Apr 9 21:59:39.852: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 16.050878641s Apr 9 21:59:41.855: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 18.054279512s Apr 9 21:59:43.860: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 20.058842261s Apr 9 21:59:45.864: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Running", Reason="", readiness=true. Elapsed: 22.062834995s Apr 9 21:59:47.867: INFO: Pod "pod-subpath-test-secret-cx2r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.065893883s STEP: Saw pod success Apr 9 21:59:47.867: INFO: Pod "pod-subpath-test-secret-cx2r" satisfied condition "success or failure" Apr 9 21:59:47.869: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-cx2r container test-container-subpath-secret-cx2r: STEP: delete the pod Apr 9 21:59:47.912: INFO: Waiting for pod pod-subpath-test-secret-cx2r to disappear Apr 9 21:59:47.927: INFO: Pod pod-subpath-test-secret-cx2r no longer exists STEP: Deleting pod pod-subpath-test-secret-cx2r Apr 9 21:59:47.927: INFO: Deleting pod "pod-subpath-test-secret-cx2r" in namespace "subpath-6207" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:59:47.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6207" for this suite. • [SLOW TEST:24.234 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":192,"skipped":3117,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:59:47.936: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 9 21:59:52.582: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9000 pod-service-account-d2e6e621-de39-4bed-80cd-b30d35350e66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 9 21:59:52.819: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9000 pod-service-account-d2e6e621-de39-4bed-80cd-b30d35350e66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 9 21:59:53.036: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9000 pod-service-account-d2e6e621-de39-4bed-80cd-b30d35350e66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:59:53.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9000" for this suite. 
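The three `kubectl exec … cat` invocations above read the files that Kubernetes automounts for a pod's service account. Any pod with token automounting enabled (the default) gets them; a minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo   # illustrative name
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
```

Inside the container, the credentials appear under `/var/run/secrets/kubernetes.io/serviceaccount/` as `token`, `ca.crt`, and `namespace` — exactly the three paths the test reads.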
• [SLOW TEST:5.330 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":193,"skipped":3137,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:59:53.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 21:59:53.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae" in namespace "downward-api-7321" to be "success or failure" Apr 9 21:59:53.374: INFO: Pod "downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 48.951539ms Apr 9 21:59:55.386: INFO: Pod "downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.060871032s Apr 9 21:59:57.390: INFO: Pod "downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064790479s STEP: Saw pod success Apr 9 21:59:57.390: INFO: Pod "downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae" satisfied condition "success or failure" Apr 9 21:59:57.392: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae container client-container: STEP: delete the pod Apr 9 21:59:57.420: INFO: Waiting for pod downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae to disappear Apr 9 21:59:57.431: INFO: Pod downwardapi-volume-a70136ad-c68d-4828-95cb-bbec744db8ae no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 21:59:57.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7321" for this suite. 
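The downward API volume test above exposes the container's memory limit as a file. A sketch of the manifest shape involved, using `resourceFieldRef` (names and the limit value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The test then reads the container's logs to confirm the file contents match the configured limit.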
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3147,"failed":0} S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 21:59:57.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 9 21:59:57.502: INFO: Waiting up to 5m0s for pod "downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e" in namespace "downward-api-4671" to be "success or failure" Apr 9 21:59:57.509: INFO: Pod "downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.726208ms Apr 9 21:59:59.513: INFO: Pod "downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011241258s Apr 9 22:00:01.517: INFO: Pod "downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015704101s STEP: Saw pod success Apr 9 22:00:01.517: INFO: Pod "downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e" satisfied condition "success or failure" Apr 9 22:00:01.521: INFO: Trying to get logs from node jerma-worker2 pod downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e container dapi-container: STEP: delete the pod Apr 9 22:00:01.547: INFO: Waiting for pod downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e to disappear Apr 9 22:00:01.552: INFO: Pod downward-api-76a9bd74-c672-482f-bcf0-ec7abf5edb6e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:00:01.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4671" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:00:01.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 9 22:00:01.607: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 9 22:00:01.636: INFO: Waiting for terminating namespaces to be deleted... 
Apr 9 22:00:01.639: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 9 22:00:01.644: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:01.644: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:00:01.644: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:01.644: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 22:00:01.644: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 9 22:00:01.651: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:01.651: INFO: Container kube-hunter ready: false, restart count 0 Apr 9 22:00:01.651: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:01.651: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:00:01.651: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 9 22:00:01.651: INFO: Container kube-bench ready: false, restart count 0 Apr 9 22:00:01.651: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:01.651: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160445b27e5696e4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:00:02.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7185" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":196,"skipped":3186,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:00:02.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 9 22:00:10.824: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 22:00:10.841: INFO: Pod pod-with-poststart-exec-hook still exists Apr 9 22:00:12.841: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 22:00:12.845: INFO: Pod pod-with-poststart-exec-hook still exists Apr 9 22:00:14.841: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 22:00:14.846: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:00:14.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8140" for this suite. 
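The pod being created and deleted above pairs a container with a postStart exec lifecycle hook. A minimal sketch, modeled on pod-with-poststart-exec-hook (the hook command shown is an assumption; the test's actual handler differs):

```yaml
# Sketch of a pod with a postStart exec hook; the handler command is
# hypothetical. The kubelet runs the hook immediately after the
# container starts, before the pod is marked Running-and-ready.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart"]
```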
• [SLOW TEST:12.163 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3191,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:00:14.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
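The namespace test above relies on cascading deletion: removing a namespace garbage-collects every pod inside it. A minimal sketch of the setup (both names hypothetical):

```yaml
# Deleting the namespace below also deletes the pod created in it,
# which is the behavior the test verifies. Names are hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```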
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:00:46.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4247" for this suite. STEP: Destroying namespace "nsdeletetest-6566" for this suite. Apr 9 22:00:46.080: INFO: Namespace nsdeletetest-6566 was already deleted STEP: Destroying namespace "nsdeletetest-794" for this suite. • [SLOW TEST:31.230 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":198,"skipped":3206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:00:46.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 9 22:00:46.123: INFO: Waiting up 
to 1m0s for all (but 0) nodes to be ready Apr 9 22:00:46.171: INFO: Waiting for terminating namespaces to be deleted... Apr 9 22:00:46.174: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 9 22:00:46.179: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:46.179: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:00:46.179: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:46.179: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 22:00:46.179: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 9 22:00:46.184: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:46.184: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:00:46.184: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 9 22:00:46.184: INFO: Container kube-bench ready: false, restart count 0 Apr 9 22:00:46.184: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:46.184: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 22:00:46.184: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 9 22:00:46.184: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 9 22:00:46.334: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 9 22:00:46.335: INFO: Pod kindnet-zk6sq requesting 
resource cpu=100m on Node jerma-worker2 Apr 9 22:00:46.335: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Apr 9 22:00:46.335: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 9 22:00:46.335: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 9 22:00:46.340: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1ea6b8e1-90e5-4c26-84f8-b1bbae1e29e5.160445bce58efd8d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7186/filler-pod-1ea6b8e1-90e5-4c26-84f8-b1bbae1e29e5 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-1ea6b8e1-90e5-4c26-84f8-b1bbae1e29e5.160445bd32840733], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1ea6b8e1-90e5-4c26-84f8-b1bbae1e29e5.160445bd74eccab6], Reason = [Created], Message = [Created container filler-pod-1ea6b8e1-90e5-4c26-84f8-b1bbae1e29e5] STEP: Considering event: Type = [Normal], Name = [filler-pod-1ea6b8e1-90e5-4c26-84f8-b1bbae1e29e5.160445bd88d6b555], Reason = [Started], Message = [Started container filler-pod-1ea6b8e1-90e5-4c26-84f8-b1bbae1e29e5] STEP: Considering event: Type = [Normal], Name = [filler-pod-a15b3245-0cb2-44f6-99a7-6beb381bbf57.160445bce76f628e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7186/filler-pod-a15b3245-0cb2-44f6-99a7-6beb381bbf57 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a15b3245-0cb2-44f6-99a7-6beb381bbf57.160445bd6ec64da1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-a15b3245-0cb2-44f6-99a7-6beb381bbf57.160445bd9915f20c], Reason = [Created], Message = [Created container filler-pod-a15b3245-0cb2-44f6-99a7-6beb381bbf57] STEP: Considering event: Type = [Normal], Name = [filler-pod-a15b3245-0cb2-44f6-99a7-6beb381bbf57.160445bda90a907f], Reason = [Started], Message = [Started container filler-pod-a15b3245-0cb2-44f6-99a7-6beb381bbf57] STEP: Considering event: Type = [Warning], Name = [additional-pod.160445bdd6b3212e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:00:51.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7186" for this suite. 
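The filler pods in this test reserve nearly all allocatable CPU through resource requests, so the final "additional-pod" cannot be scheduled. A request-only sketch (the cpu=11130m figure comes from the log above; the pod name is shortened):

```yaml
# Filler pod requesting most of a node's allocatable CPU, as logged
# above (cpu=11130m). Requests alone drive scheduling decisions;
# no limit is needed to trigger "Insufficient cpu".
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "11130m"
```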
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.613 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":199,"skipped":3231,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:00:51.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:00:51.828: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 9 22:00:51.889: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 9 22:00:56.895: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 9 22:00:56.895: INFO: Creating deployment "test-rolling-update-deployment" Apr 9 
22:00:56.899: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 9 22:00:56.910: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 9 22:00:58.917: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 9 22:00:58.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066457, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066457, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066457, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066456, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:01:00.923: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 9 22:01:00.931: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4239 /apis/apps/v1/namespaces/deployment-4239/deployments/test-rolling-update-deployment 9faff226-ef54-4246-9d4e-d69175f3ab14 6785607 1 2020-04-09 22:00:56 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005338958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-09 22:00:57 +0000 UTC,LastTransitionTime:2020-04-09 22:00:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-09 22:01:00 +0000 UTC,LastTransitionTime:2020-04-09 22:00:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 9 22:01:00.934: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 
deployment-4239 /apis/apps/v1/namespaces/deployment-4239/replicasets/test-rolling-update-deployment-67cf4f6444 cbaecb8f-6203-4099-ba0a-7e5da33ff319 6785596 1 2020-04-09 22:00:56 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 9faff226-ef54-4246-9d4e-d69175f3ab14 0xc005368ac7 0xc005368ac8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005368b38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:01:00.934: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 9 22:01:00.934: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4239 /apis/apps/v1/namespaces/deployment-4239/replicasets/test-rolling-update-controller a649f451-e6a3-453b-b4d7-e224f710a125 6785605 2 2020-04-09 22:00:51 +0000 UTC map[name:sample-pod 
pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 9faff226-ef54-4246-9d4e-d69175f3ab14 0xc0053689f7 0xc0053689f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005368a58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:01:00.937: INFO: Pod "test-rolling-update-deployment-67cf4f6444-c4nfk" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-c4nfk test-rolling-update-deployment-67cf4f6444- deployment-4239 /api/v1/namespaces/deployment-4239/pods/test-rolling-update-deployment-67cf4f6444-c4nfk 4da512d8-0c62-430d-ad9f-e6dba7b55aa2 6785595 0 2020-04-09 22:00:56 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 cbaecb8f-6203-4099-ba0a-7e5da33ff319 0xc005368f87 0xc005368f88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xjprf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xjprf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xjprf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Host
name:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:00:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:00:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:00:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:00:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.16,StartTime:2020-04-09 22:00:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:00:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7a147b2dd30941c8afe489cb97610246671e13771e940d4877a90f2037a9d7dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:01:00.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4239" for this suite. • [SLOW TEST:9.244 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":200,"skipped":3238,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:01:00.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3412, will wait for the garbage collector to delete the pods Apr 9 22:01:05.076: INFO: Deleting Job.batch foo took: 9.952267ms Apr 9 22:01:05.376: INFO: Terminating Job.batch foo pods took: 300.363728ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:01:49.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3412" for this suite. • [SLOW TEST:48.346 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":201,"skipped":3245,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:01:49.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable 
(memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 22:01:49.370: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064" in namespace "projected-8232" to be "success or failure" Apr 9 22:01:49.374: INFO: Pod "downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064": Phase="Pending", Reason="", readiness=false. Elapsed: 3.736186ms Apr 9 22:01:51.399: INFO: Pod "downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029455199s Apr 9 22:01:53.404: INFO: Pod "downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034047821s STEP: Saw pod success Apr 9 22:01:53.404: INFO: Pod "downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064" satisfied condition "success or failure" Apr 9 22:01:53.407: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064 container client-container: STEP: delete the pod Apr 9 22:01:53.437: INFO: Waiting for pod downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064 to disappear Apr 9 22:01:53.452: INFO: Pod downwardapi-volume-69ffeb99-f3b7-4f46-9b2a-9b1f4c879064 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:01:53.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8232" for this suite. 
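The downward API volume in this test exposes the container's memory limit as a file; because the container sets no limit, the kubelet substitutes node allocatable memory. A sketch of the volume wiring (pod and path names are illustrative, not the test's exact spec):

```yaml
# Projected downward API volume exposing limits.memory. Since the
# container declares no memory limit, the file reports the node's
# allocatable memory instead — the behavior this test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```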
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:01:53.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 9 22:01:53.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1162' Apr 9 22:01:53.618: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 9 22:01:53.618: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Apr 9 22:01:53.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1162' Apr 9 22:01:53.719: INFO: stderr: "" Apr 9 22:01:53.719: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:01:53.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1162" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":203,"skipped":3276,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:01:53.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: 
Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 22:01:54.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 22:01:56.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066514, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066514, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066514, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066514, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 22:01:59.882: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:02:00.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-6722" for this suite. STEP: Destroying namespace "webhook-6722-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.675 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":204,"skipped":3294,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:02:00.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:02:00.519: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:02:04.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4452" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3312,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:02:04.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:02:04.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-190" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":206,"skipped":3328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:02:04.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-0258697c-ef19-4309-9c99-57715c50bda6 in namespace container-probe-9053 Apr 9 22:02:08.757: INFO: Started pod busybox-0258697c-ef19-4309-9c99-57715c50bda6 in namespace container-probe-9053 STEP: checking the pod's current state and verifying that restartCount is present Apr 9 22:02:08.760: INFO: Initial restart count of pod busybox-0258697c-ef19-4309-9c99-57715c50bda6 is 0 Apr 9 22:02:56.902: INFO: Restart count of pod 
container-probe-9053/busybox-0258697c-ef19-4309-9c99-57715c50bda6 is now 1 (48.142179364s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:02:56.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9053" for this suite. • [SLOW TEST:52.261 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3359,"failed":0} [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:02:56.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8554.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8554.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK 
> /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8554.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8554.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8554.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8554.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 22:03:03.082: INFO: DNS probes using dns-8554/dns-test-fca4b923-7255-482a-acd2-104ad0a3797d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:03:03.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8554" for this suite. 
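The probe scripts above derive each pod's DNS A-record name by rewriting its IP address into dashed form. A minimal sketch of just that transformation, runnable without a cluster (the IP is an example value taken from a later pod in this log; the namespace `dns-8554` comes from the test above):

```shell
# Rewrite a pod IP into the dashed pod A-record name served by cluster DNS,
# mirroring the awk step inside the probe loops above.
podIP="10.244.1.181"
podARec=$(echo "$podIP" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8554.pod.cluster.local"}')
echo "$podARec"
```

The test then resolves this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) from inside the probe pod.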
• [SLOW TEST:6.193 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":208,"skipped":3359,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:03:03.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 9 22:03:03.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9188' Apr 9 22:03:03.585: INFO: stderr: "" Apr 9 22:03:03.585: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: 
verifying the pod e2e-test-httpd-pod was created Apr 9 22:03:08.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9188 -o json' Apr 9 22:03:08.734: INFO: stderr: "" Apr 9 22:03:08.734: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-09T22:03:03Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9188\",\n \"resourceVersion\": \"6786281\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9188/pods/e2e-test-httpd-pod\",\n \"uid\": \"fee42bf7-8153-4a39-a634-42d60ea32c0f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-cw84c\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-cw84c\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-cw84c\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T22:03:03Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T22:03:06Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T22:03:06Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T22:03:03Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://aa27d8f83034eb8ca1b84a453085ee84c53fad0025fb92a6be40ee0ad02ce2e6\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-09T22:03:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.181\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.181\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-09T22:03:03Z\"\n }\n}\n" STEP: replace the image in the pod Apr 9 22:03:08.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9188' Apr 9 22:03:09.025: INFO: stderr: "" Apr 9 22:03:09.025: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 9 22:03:09.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9188' Apr 9 22:03:19.243: INFO: stderr: "" Apr 9 22:03:19.243: INFO: 
stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:03:19.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9188" for this suite. • [SLOW TEST:16.108 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":209,"skipped":3361,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:03:19.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1015.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-1015.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 22:03:25.350: INFO: DNS probes using dns-test-f0de3d84-8384-462a-a26d-25b934ad9b3e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1015.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1015.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 22:03:31.472: INFO: File wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:31.477: INFO: File jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:31.477: INFO: Lookups using dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 failed for: [wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local] Apr 9 22:03:36.481: INFO: File wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 9 22:03:36.484: INFO: File jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:36.485: INFO: Lookups using dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 failed for: [wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local] Apr 9 22:03:41.482: INFO: File wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:41.486: INFO: File jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:41.486: INFO: Lookups using dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 failed for: [wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local] Apr 9 22:03:46.487: INFO: File wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:46.491: INFO: File jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:46.491: INFO: Lookups using dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 failed for: [wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local] Apr 9 22:03:51.482: INFO: File wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 9 22:03:51.486: INFO: File jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local from pod dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 9 22:03:51.486: INFO: Lookups using dns-1015/dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 failed for: [wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local] Apr 9 22:03:56.485: INFO: DNS probes using dns-test-4ffd23e8-a9ef-4f39-8346-dee7347e73e8 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1015.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1015.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1015.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1015.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 22:04:02.984: INFO: DNS probes using dns-test-1393aec2-194d-4352-ba4a-25c7429ff20b succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:04:03.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1015" for this suite. 
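The test above exercises an ExternalName Service, whose cluster DNS name resolves as a CNAME to `spec.externalName`; updating that field is what eventually flips the probe results from `foo.example.com.` to `bar.example.com.`. A hedged sketch of the manifest and the update (service and namespace names are taken from the log; the kubectl commands assume a live cluster and are shown commented out, illustrative only):

```shell
# Write an ExternalName Service manifest matching the one the test creates.
cat <<'EOF' > externalname-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-1015
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# Illustrative only -- requires a running cluster:
# kubectl apply -f externalname-svc.yaml
# Flip the CNAME target, as the test does mid-run:
# kubectl patch svc dns-test-service-3 -n dns-1015 \
#   -p '{"spec":{"externalName":"bar.example.com"}}'
```

The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines above are the expected polling interval while the DNS cache catches up to the patched field.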
• [SLOW TEST:43.851 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":210,"skipped":3367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:04:03.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:04:03.516: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6386a0ba-bfd6-42af-8c42-1ad47de461ed" in namespace "security-context-test-6819" to be "success or failure" Apr 9 22:04:03.523: INFO: Pod "alpine-nnp-false-6386a0ba-bfd6-42af-8c42-1ad47de461ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.175934ms Apr 9 22:04:05.528: INFO: Pod "alpine-nnp-false-6386a0ba-bfd6-42af-8c42-1ad47de461ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011628191s Apr 9 22:04:07.532: INFO: Pod "alpine-nnp-false-6386a0ba-bfd6-42af-8c42-1ad47de461ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015192533s Apr 9 22:04:07.532: INFO: Pod "alpine-nnp-false-6386a0ba-bfd6-42af-8c42-1ad47de461ed" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:04:07.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6819" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:04:07.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8231 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 9 22:04:07.666: INFO: Found 0 stateful pods, waiting for 3 Apr 9 22:04:17.670: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 22:04:17.670: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 9 22:04:17.670: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 9 22:04:27.671: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 22:04:27.671: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 9 22:04:27.671: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 9 22:04:27.698: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 9 22:04:37.764: INFO: Updating stateful set ss2 Apr 9 22:04:37.805: INFO: Waiting for Pod statefulset-8231/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 9 22:04:47.812: INFO: Waiting for Pod statefulset-8231/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 9 22:04:58.103: INFO: Found 2 stateful pods, waiting for 3 Apr 9 22:05:08.108: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 
22:05:08.108: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 9 22:05:08.108: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 9 22:05:08.130: INFO: Updating stateful set ss2 Apr 9 22:05:08.176: INFO: Waiting for Pod statefulset-8231/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 9 22:05:18.184: INFO: Waiting for Pod statefulset-8231/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 9 22:05:28.201: INFO: Updating stateful set ss2 Apr 9 22:05:28.213: INFO: Waiting for StatefulSet statefulset-8231/ss2 to complete update Apr 9 22:05:28.213: INFO: Waiting for Pod statefulset-8231/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 9 22:05:38.221: INFO: Waiting for StatefulSet statefulset-8231/ss2 to complete update Apr 9 22:05:38.221: INFO: Waiting for Pod statefulset-8231/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 9 22:05:48.221: INFO: Deleting all statefulset in ns statefulset-8231 Apr 9 22:05:48.224: INFO: Scaling statefulset ss2 to 0 Apr 9 22:06:18.241: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 22:06:18.244: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:06:18.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8231" for this suite. 
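The canary and phased stages above are driven by the StatefulSet `rollingUpdate.partition` field: only pods whose ordinal is greater than or equal to the partition move to the new revision, which is why ss2-2 is updated first and ss2-0 last as the partition is lowered. A small sketch of that ordinal rule (pure shell, no cluster needed; the revision hashes are taken from the log, where ss2-84f9d6bf57 is the old revision and ss2-65c7964b94 the update revision):

```shell
# Which pods a given partition value would move to the new revision:
# ordinal >= partition is updated, lower ordinals stay on the old revision.
partition=2
replicas=3
plan=""
for i in $(seq 0 $((replicas - 1))); do
  if [ "$i" -ge "$partition" ]; then
    rev="ss2-65c7964b94 (new)"
  else
    rev="ss2-84f9d6bf57 (old)"
  fi
  plan="$plan ss2-$i->$rev"
  echo "ss2-$i -> $rev"
done
```

Lowering `partition` step by step (3, then 2, then 0) reproduces the phased rollout the test performs with `kubectl patch` on the update strategy.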
• [SLOW TEST:130.700 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":212,"skipped":3472,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:06:18.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-d31eb6a1-c333-4733-beed-180a1fc71e05
STEP: Creating configMap with name cm-test-opt-upd-592b323b-0518-4774-a040-6dc82391761c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d31eb6a1-c333-4733-beed-180a1fc71e05
STEP: Updating configmap cm-test-opt-upd-592b323b-0518-4774-a040-6dc82391761c
STEP: Creating configMap with name cm-test-opt-create-9655841b-a64b-4478-824e-192889c60e11
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:06:26.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2836" for this suite.
• [SLOW TEST:8.193 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3479,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:06:26.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Apr 9 22:06:26.531: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5466" to be "success or failure"
Apr 9 22:06:26.537: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.355745ms
Apr 9 22:06:28.541: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010104596s
Apr 9 22:06:30.546: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014400327s
Apr 9 22:06:32.550: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018733142s
STEP: Saw pod success
Apr 9 22:06:32.550: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 9 22:06:32.556: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 9 22:06:32.651: INFO: Waiting for pod pod-host-path-test to disappear
Apr 9 22:06:32.705: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:06:32.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5466" for this suite.
• [SLOW TEST:6.253 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3515,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:06:32.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0409 22:06:33.941748 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 9 22:06:33.941: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:06:33.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1488" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":215,"skipped":3520,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:06:33.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 9 22:06:34.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3" in namespace "downward-api-7193" to be "success or failure"
Apr 9 22:06:34.119: INFO: Pod "downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.474176ms
Apr 9 22:06:36.167: INFO: Pod "downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072108721s
Apr 9 22:06:38.188: INFO: Pod "downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093357508s
STEP: Saw pod success
Apr 9 22:06:38.188: INFO: Pod "downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3" satisfied condition "success or failure"
Apr 9 22:06:38.192: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3 container client-container:
STEP: delete the pod
Apr 9 22:06:38.215: INFO: Waiting for pod downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3 to disappear
Apr 9 22:06:38.220: INFO: Pod downwardapi-volume-c65601fc-7b18-41cb-bcc6-1c4e2f67d2e3 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:06:38.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7193" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3528,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:06:38.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 9 22:06:38.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210" in namespace "downward-api-9370" to be "success or failure"
Apr 9 22:06:38.344: INFO: Pod "downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210": Phase="Pending", Reason="", readiness=false. Elapsed: 35.14196ms
Apr 9 22:06:40.347: INFO: Pod "downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037765513s
Apr 9 22:06:42.350: INFO: Pod "downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040916135s
STEP: Saw pod success
Apr 9 22:06:42.350: INFO: Pod "downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210" satisfied condition "success or failure"
Apr 9 22:06:42.352: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210 container client-container:
STEP: delete the pod
Apr 9 22:06:42.393: INFO: Waiting for pod downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210 to disappear
Apr 9 22:06:42.405: INFO: Pod downwardapi-volume-2fd80b94-8493-450d-97b0-943d870ee210 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:06:42.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9370" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3530,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:06:42.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:06:58.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1603" for this suite.
• [SLOW TEST:16.255 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":218,"skipped":3534,"failed":0}
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:06:58.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:07:02.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-420" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3534,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:07:02.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 22:07:03.424: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 22:07:05.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066823, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066823, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066823, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066823, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 22:07:08.460: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 9 22:07:08.484: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:07:08.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8780" for this suite.
STEP: Destroying namespace "webhook-8780-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.838 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":220,"skipped":3538,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:07:08.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 9 22:07:11.697: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:07:11.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6148" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3545,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:07:11.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 9 22:07:11.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0" in namespace "projected-3727" to be "success or failure"
Apr 9 22:07:11.891: INFO: Pod "downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834854ms
Apr 9 22:07:13.894: INFO: Pod "downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006647493s
Apr 9 22:07:15.898: INFO: Pod "downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010799821s
STEP: Saw pod success
Apr 9 22:07:15.898: INFO: Pod "downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0" satisfied condition "success or failure"
Apr 9 22:07:15.901: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0 container client-container:
STEP: delete the pod
Apr 9 22:07:15.917: INFO: Waiting for pod downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0 to disappear
Apr 9 22:07:15.921: INFO: Pod downwardapi-volume-96f9ca40-fe74-495b-9a8d-585738d1f1d0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:07:15.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3727" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3558,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:07:15.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 22:07:16.665: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 22:07:18.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066836, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066836, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066836, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066836, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 22:07:21.718: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 9 22:07:25.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4841 to-be-attached-pod -i -c=container1'
Apr 9 22:07:28.334: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:07:28.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4841" for this suite.
STEP: Destroying namespace "webhook-4841-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.517 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":223,"skipped":3583,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:07:28.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4811/configmap-test-92ecdc66-45db-48ec-81c4-55a6c242e133
STEP: Creating a pod to test consume configMaps
Apr 9 22:07:28.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3" in namespace "configmap-4811" to be "success or failure"
Apr 9 22:07:28.563: INFO: Pod "pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060231ms
Apr 9 22:07:30.568: INFO: Pod "pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008455157s
Apr 9 22:07:32.571: INFO: Pod "pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012070609s
STEP: Saw pod success
Apr 9 22:07:32.571: INFO: Pod "pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3" satisfied condition "success or failure"
Apr 9 22:07:32.574: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3 container env-test:
STEP: delete the pod
Apr 9 22:07:32.595: INFO: Waiting for pod pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3 to disappear
Apr 9 22:07:32.625: INFO: Pod pod-configmaps-449ca7bb-e3fb-47cd-90a0-73244fe7ebb3 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:07:32.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4811" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3603,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:07:32.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0409 22:07:44.953103 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 9 22:07:44.953: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:07:44.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6273" for this suite.
• [SLOW TEST:12.305 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":225,"skipped":3615,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:07:44.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 9 22:07:45.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9" in namespace "downward-api-6915" to be "success or failure"
Apr 9 22:07:45.055: INFO: Pod "downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9": Phase="Pending", Reason="", readiness=false.
Elapsed: 19.694734ms Apr 9 22:07:47.059: INFO: Pod "downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023190461s Apr 9 22:07:49.062: INFO: Pod "downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026417907s STEP: Saw pod success Apr 9 22:07:49.062: INFO: Pod "downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9" satisfied condition "success or failure" Apr 9 22:07:49.065: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9 container client-container: STEP: delete the pod Apr 9 22:07:49.079: INFO: Waiting for pod downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9 to disappear Apr 9 22:07:49.084: INFO: Pod downwardapi-volume-44d96a9d-efb5-4e32-9fae-50ffd7f69ae9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:07:49.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6915" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:07:49.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 22:07:49.727: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 22:07:51.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:07:53.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066869, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 22:07:57.004: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:07:57.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1348" for this suite. STEP: Destroying namespace "webhook-1348-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.185 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":227,"skipped":3652,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:07:57.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy 
FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 9 22:08:00.544: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:00.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6270" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3662,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:00.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service 
clusterip-service with the type=ClusterIP in namespace services-3727 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3727 STEP: creating replication controller externalsvc in namespace services-3727 I0409 22:08:00.842300 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3727, replica count: 2 I0409 22:08:03.892733 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 22:08:06.892991 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 9 22:08:07.040: INFO: Creating new exec pod Apr 9 22:08:11.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3727 execpod9qz84 -- /bin/sh -x -c nslookup clusterip-service' Apr 9 22:08:11.432: INFO: stderr: "I0409 22:08:11.326512 3287 log.go:172] (0xc000598d10) (0xc000a2c000) Create stream\nI0409 22:08:11.326600 3287 log.go:172] (0xc000598d10) (0xc000a2c000) Stream added, broadcasting: 1\nI0409 22:08:11.329321 3287 log.go:172] (0xc000598d10) Reply frame received for 1\nI0409 22:08:11.329373 3287 log.go:172] (0xc000598d10) (0xc000a2c0a0) Create stream\nI0409 22:08:11.329403 3287 log.go:172] (0xc000598d10) (0xc000a2c0a0) Stream added, broadcasting: 3\nI0409 22:08:11.330338 3287 log.go:172] (0xc000598d10) Reply frame received for 3\nI0409 22:08:11.330382 3287 log.go:172] (0xc000598d10) (0xc0006efae0) Create stream\nI0409 22:08:11.330396 3287 log.go:172] (0xc000598d10) (0xc0006efae0) Stream added, broadcasting: 5\nI0409 22:08:11.331284 3287 log.go:172] (0xc000598d10) Reply frame received for 5\nI0409 22:08:11.417903 3287 log.go:172] (0xc000598d10) Data frame received for 
5\nI0409 22:08:11.417938 3287 log.go:172] (0xc0006efae0) (5) Data frame handling\nI0409 22:08:11.417961 3287 log.go:172] (0xc0006efae0) (5) Data frame sent\n+ nslookup clusterip-service\nI0409 22:08:11.422958 3287 log.go:172] (0xc000598d10) Data frame received for 3\nI0409 22:08:11.422985 3287 log.go:172] (0xc000a2c0a0) (3) Data frame handling\nI0409 22:08:11.423012 3287 log.go:172] (0xc000a2c0a0) (3) Data frame sent\nI0409 22:08:11.424075 3287 log.go:172] (0xc000598d10) Data frame received for 3\nI0409 22:08:11.424100 3287 log.go:172] (0xc000a2c0a0) (3) Data frame handling\nI0409 22:08:11.424121 3287 log.go:172] (0xc000a2c0a0) (3) Data frame sent\nI0409 22:08:11.424803 3287 log.go:172] (0xc000598d10) Data frame received for 5\nI0409 22:08:11.424822 3287 log.go:172] (0xc0006efae0) (5) Data frame handling\nI0409 22:08:11.424868 3287 log.go:172] (0xc000598d10) Data frame received for 3\nI0409 22:08:11.424922 3287 log.go:172] (0xc000a2c0a0) (3) Data frame handling\nI0409 22:08:11.426889 3287 log.go:172] (0xc000598d10) Data frame received for 1\nI0409 22:08:11.426913 3287 log.go:172] (0xc000a2c000) (1) Data frame handling\nI0409 22:08:11.426927 3287 log.go:172] (0xc000a2c000) (1) Data frame sent\nI0409 22:08:11.426946 3287 log.go:172] (0xc000598d10) (0xc000a2c000) Stream removed, broadcasting: 1\nI0409 22:08:11.426969 3287 log.go:172] (0xc000598d10) Go away received\nI0409 22:08:11.427469 3287 log.go:172] (0xc000598d10) (0xc000a2c000) Stream removed, broadcasting: 1\nI0409 22:08:11.427491 3287 log.go:172] (0xc000598d10) (0xc000a2c0a0) Stream removed, broadcasting: 3\nI0409 22:08:11.427503 3287 log.go:172] (0xc000598d10) (0xc0006efae0) Stream removed, broadcasting: 5\n" Apr 9 22:08:11.432: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3727.svc.cluster.local\tcanonical name = externalsvc.services-3727.svc.cluster.local.\nName:\texternalsvc.services-3727.svc.cluster.local\nAddress: 10.102.80.155\n\n" STEP: deleting 
ReplicationController externalsvc in namespace services-3727, will wait for the garbage collector to delete the pods Apr 9 22:08:11.492: INFO: Deleting ReplicationController externalsvc took: 6.053789ms Apr 9 22:08:11.792: INFO: Terminating ReplicationController externalsvc pods took: 300.266256ms Apr 9 22:08:19.316: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:19.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3727" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.719 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":229,"skipped":3677,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:19.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 9 22:08:19.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4333' Apr 9 22:08:19.608: INFO: stderr: "" Apr 9 22:08:19.608: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 9 22:08:20.613: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 22:08:20.613: INFO: Found 0 / 1 Apr 9 22:08:21.612: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 22:08:21.612: INFO: Found 0 / 1 Apr 9 22:08:22.612: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 22:08:22.612: INFO: Found 1 / 1 Apr 9 22:08:22.612: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 9 22:08:22.616: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 22:08:22.616: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 9 22:08:22.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-pc482 --namespace=kubectl-4333 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 9 22:08:22.715: INFO: stderr: "" Apr 9 22:08:22.715: INFO: stdout: "pod/agnhost-master-pc482 patched\n" STEP: checking annotations Apr 9 22:08:22.718: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 22:08:22.718: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:22.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4333" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":230,"skipped":3682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:22.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-fmmh4 in namespace proxy-568 I0409 22:08:22.840381 6 runners.go:189] Created replication controller with name: proxy-service-fmmh4, namespace: proxy-568, replica count: 1 I0409 22:08:23.890913 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 22:08:24.891152 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 22:08:25.891375 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 22:08:26.891591 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 22:08:27.891762 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 22:08:28.891963 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 22:08:29.892211 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 22:08:30.892471 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 22:08:31.892743 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 22:08:32.893013 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 22:08:33.893390 6 runners.go:189] proxy-service-fmmh4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 9 22:08:33.897: INFO: setup took 11.133287863s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 9 22:08:33.910: INFO: (0) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 13.40693ms) Apr 9 22:08:33.910: INFO: (0) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... 
(200; 13.324658ms) Apr 9 22:08:33.910: INFO: (0) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtest (200; 3.866834ms) Apr 9 22:08:33.925: INFO: (1) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 3.86667ms) Apr 9 22:08:33.925: INFO: (1) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.070313ms) Apr 9 22:08:33.925: INFO: (1) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: t... (200; 5.342451ms) Apr 9 22:08:33.926: INFO: (1) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtest (200; 2.52199ms) Apr 9 22:08:33.932: INFO: (2) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... (200; 3.743303ms) Apr 9 22:08:33.932: INFO: (2) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 3.939597ms) Apr 9 22:08:33.932: INFO: (2) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: testt... (200; 5.178007ms) Apr 9 22:08:33.939: INFO: (3) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 5.003976ms) Apr 9 22:08:33.939: INFO: (3) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 5.05801ms) Apr 9 22:08:33.939: INFO: (3) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtest (200; 3.636406ms) Apr 9 22:08:33.944: INFO: (4) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: testt... 
(200; 3.852902ms) Apr 9 22:08:33.944: INFO: (4) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname2/proxy/: bar (200; 3.900051ms) Apr 9 22:08:33.944: INFO: (4) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 4.033416ms) Apr 9 22:08:33.945: INFO: (4) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 4.746029ms) Apr 9 22:08:33.945: INFO: (4) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 5.02179ms) Apr 9 22:08:33.945: INFO: (4) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname2/proxy/: bar (200; 4.944217ms) Apr 9 22:08:33.945: INFO: (4) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname2/proxy/: tls qux (200; 5.212444ms) Apr 9 22:08:33.948: INFO: (5) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 2.509513ms) Apr 9 22:08:33.948: INFO: (5) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 2.641934ms) Apr 9 22:08:33.948: INFO: (5) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:462/proxy/: tls qux (200; 2.505891ms) Apr 9 22:08:33.950: INFO: (5) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 4.309029ms) Apr 9 22:08:33.950: INFO: (5) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname2/proxy/: tls qux (200; 4.677463ms) Apr 9 22:08:33.950: INFO: (5) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname2/proxy/: bar (200; 4.771216ms) Apr 9 22:08:33.950: INFO: (5) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.705208ms) Apr 9 22:08:33.950: INFO: (5) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 4.703237ms) Apr 9 22:08:33.950: INFO: (5) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... 
(200; 4.846827ms) Apr 9 22:08:33.950: INFO: (5) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtesttest (200; 3.700532ms) Apr 9 22:08:33.955: INFO: (6) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: t... (200; 4.0263ms) Apr 9 22:08:33.955: INFO: (6) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 4.081002ms) Apr 9 22:08:33.955: INFO: (6) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 4.156961ms) Apr 9 22:08:33.955: INFO: (6) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:462/proxy/: tls qux (200; 4.081382ms) Apr 9 22:08:33.955: INFO: (6) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 4.276392ms) Apr 9 22:08:33.956: INFO: (6) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname2/proxy/: bar (200; 5.128395ms) Apr 9 22:08:33.956: INFO: (6) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname2/proxy/: bar (200; 5.167536ms) Apr 9 22:08:33.956: INFO: (6) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 5.234305ms) Apr 9 22:08:33.956: INFO: (6) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname2/proxy/: tls qux (200; 5.269466ms) Apr 9 22:08:33.956: INFO: (6) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 5.27585ms) Apr 9 22:08:33.959: INFO: (7) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:462/proxy/: tls qux (200; 2.246894ms) Apr 9 22:08:33.960: INFO: (7) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... 
(200; 3.463784ms) Apr 9 22:08:33.960: INFO: (7) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtest (200; 3.862704ms) Apr 9 22:08:33.960: INFO: (7) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 3.882556ms) Apr 9 22:08:33.960: INFO: (7) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 3.847669ms) Apr 9 22:08:33.960: INFO: (7) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: test (200; 3.142645ms) Apr 9 22:08:33.965: INFO: (8) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:462/proxy/: tls qux (200; 3.103194ms) Apr 9 22:08:33.965: INFO: (8) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 3.231953ms) Apr 9 22:08:33.965: INFO: (8) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testt... (200; 3.847309ms) Apr 9 22:08:33.966: INFO: (8) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: test (200; 4.227027ms) Apr 9 22:08:33.971: INFO: (9) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... (200; 4.283572ms) Apr 9 22:08:33.971: INFO: (9) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 4.380107ms) Apr 9 22:08:33.971: INFO: (9) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.402955ms) Apr 9 22:08:33.971: INFO: (9) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: testt... 
(200; 2.989165ms) Apr 9 22:08:33.976: INFO: (10) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 4.419401ms) Apr 9 22:08:33.976: INFO: (10) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.367886ms) Apr 9 22:08:33.976: INFO: (10) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 4.512952ms) Apr 9 22:08:33.976: INFO: (10) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 4.639501ms) Apr 9 22:08:33.977: INFO: (10) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 5.266467ms) Apr 9 22:08:33.977: INFO: (10) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtest (200; 4.2116ms) Apr 9 22:08:33.983: INFO: (11) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.324833ms) Apr 9 22:08:33.983: INFO: (11) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testt... (200; 4.870792ms) Apr 9 22:08:33.983: INFO: (11) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: t... 
(200; 4.474097ms) Apr 9 22:08:33.989: INFO: (12) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtest (200; 4.680156ms) Apr 9 22:08:33.989: INFO: (12) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.760609ms) Apr 9 22:08:33.989: INFO: (12) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 4.762811ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:462/proxy/: tls qux (200; 5.023928ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 5.192886ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname2/proxy/: tls qux (200; 5.224239ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 5.731398ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 5.789671ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname2/proxy/: bar (200; 5.790354ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 5.873932ms) Apr 9 22:08:33.990: INFO: (12) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname2/proxy/: bar (200; 5.773422ms) Apr 9 22:08:33.992: INFO: (13) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 1.728293ms) Apr 9 22:08:33.994: INFO: (13) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 3.193613ms) Apr 9 22:08:33.994: INFO: (13) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 3.610425ms) Apr 9 22:08:33.994: INFO: (13) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 3.587233ms) Apr 9 
22:08:33.994: INFO: (13) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testt... (200; 4.132833ms) Apr 9 22:08:33.995: INFO: (13) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 4.194955ms) Apr 9 22:08:33.995: INFO: (13) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: t... (200; 2.339417ms) Apr 9 22:08:33.998: INFO: (14) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 2.904797ms) Apr 9 22:08:33.998: INFO: (14) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 3.044516ms) Apr 9 22:08:33.999: INFO: (14) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 3.120661ms) Apr 9 22:08:33.999: INFO: (14) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: testt... (200; 3.72529ms) Apr 9 22:08:34.005: INFO: (15) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname2/proxy/: bar (200; 3.916244ms) Apr 9 22:08:34.005: INFO: (15) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.044636ms) Apr 9 22:08:34.005: INFO: (15) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname2/proxy/: tls qux (200; 4.212497ms) Apr 9 22:08:34.005: INFO: (15) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:462/proxy/: tls qux (200; 4.247072ms) Apr 9 22:08:34.006: INFO: (15) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname2/proxy/: bar (200; 4.590425ms) Apr 9 22:08:34.006: INFO: (15) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 4.544501ms) Apr 9 22:08:34.006: INFO: (15) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 4.539474ms) Apr 9 22:08:34.006: INFO: (15) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 4.639746ms) Apr 9 22:08:34.006: INFO: 
(15) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 4.784668ms) Apr 9 22:08:34.006: INFO: (15) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 4.786353ms) Apr 9 22:08:34.006: INFO: (15) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 4.982248ms) Apr 9 22:08:34.006: INFO: (15) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testt... (200; 2.25927ms) Apr 9 22:08:34.009: INFO: (16) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 2.506548ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: test (200; 4.613312ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.590197ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 4.687549ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 4.660797ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname2/proxy/: bar (200; 4.657589ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 4.626847ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 4.669556ms) Apr 9 22:08:34.012: INFO: (16) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:1080/proxy/: testtesttest (200; 4.658559ms) Apr 9 22:08:34.017: INFO: (17) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.653581ms) Apr 9 22:08:34.017: INFO: (17) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... 
(200; 4.681995ms) Apr 9 22:08:34.017: INFO: (17) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 4.650041ms) Apr 9 22:08:34.017: INFO: (17) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname2/proxy/: bar (200; 4.943416ms) Apr 9 22:08:34.017: INFO: (17) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 5.015743ms) Apr 9 22:08:34.017: INFO: (17) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname2/proxy/: bar (200; 5.020651ms) Apr 9 22:08:34.018: INFO: (17) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 5.2579ms) Apr 9 22:08:34.018: INFO: (17) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname2/proxy/: tls qux (200; 5.308484ms) Apr 9 22:08:34.020: INFO: (18) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 1.98762ms) Apr 9 22:08:34.020: INFO: (18) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 2.03823ms) Apr 9 22:08:34.021: INFO: (18) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... 
(200; 3.631486ms) Apr 9 22:08:34.021: INFO: (18) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 3.724457ms) Apr 9 22:08:34.022: INFO: (18) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: testtest (200; 4.504008ms) Apr 9 22:08:34.023: INFO: (18) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 4.679139ms) Apr 9 22:08:34.023: INFO: (18) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 5.05836ms) Apr 9 22:08:34.023: INFO: (18) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname1/proxy/: foo (200; 5.142866ms) Apr 9 22:08:34.023: INFO: (18) /api/v1/namespaces/proxy-568/services/proxy-service-fmmh4:portname2/proxy/: bar (200; 5.535248ms) Apr 9 22:08:34.023: INFO: (18) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname2/proxy/: tls qux (200; 5.635141ms) Apr 9 22:08:34.023: INFO: (18) /api/v1/namespaces/proxy-568/services/https:proxy-service-fmmh4:tlsportname1/proxy/: tls baz (200; 5.614118ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 3.670796ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/services/http:proxy-service-fmmh4:portname1/proxy/: foo (200; 3.712061ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:460/proxy/: tls baz (200; 3.659325ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:462/proxy/: tls qux (200; 3.705386ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:162/proxy/: bar (200; 3.614559ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/http:proxy-service-fmmh4-rhzhv:1080/proxy/: t... 
(200; 3.700205ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv/proxy/: test (200; 3.737264ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/proxy-service-fmmh4-rhzhv:160/proxy/: foo (200; 3.743772ms) Apr 9 22:08:34.027: INFO: (19) /api/v1/namespaces/proxy-568/pods/https:proxy-service-fmmh4-rhzhv:443/proxy/: test>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 9 22:08:39.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff" in namespace "projected-8305" to be "success or failure" Apr 9 22:08:39.699: INFO: Pod "downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194732ms Apr 9 22:08:41.711: INFO: Pod "downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016471877s Apr 9 22:08:43.715: INFO: Pod "downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020503343s STEP: Saw pod success Apr 9 22:08:43.715: INFO: Pod "downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff" satisfied condition "success or failure" Apr 9 22:08:43.717: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff container client-container: STEP: delete the pod Apr 9 22:08:43.782: INFO: Waiting for pod downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff to disappear Apr 9 22:08:43.785: INFO: Pod downwardapi-volume-9bf59692-7ac3-4791-a6d0-f111d52a0bff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:43.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8305" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3811,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:43.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:08:43.859: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 9 22:08:48.902: INFO: Pod 
name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 9 22:08:48.902: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 9 22:08:48.956: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7451 /apis/apps/v1/namespaces/deployment-7451/deployments/test-cleanup-deployment 52b695b5-c24b-4ce3-b0ba-8a6c5a8460df 6788680 1 2020-04-09 22:08:48 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022a7d98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 9 22:08:48.997: INFO: 
New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7451 /apis/apps/v1/namespaces/deployment-7451/replicasets/test-cleanup-deployment-55ffc6b7b6 39a424ef-7197-435f-a1ea-44928486900f 6788687 1 2020-04-09 22:08:48 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 52b695b5-c24b-4ce3-b0ba-8a6c5a8460df 0xc0053f3387 0xc0053f3388}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053f33f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:08:48.998: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 9 22:08:48.998: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7451 /apis/apps/v1/namespaces/deployment-7451/replicasets/test-cleanup-controller 
494be4cc-087e-48de-a1af-8c7ecc087d54 6788682 1 2020-04-09 22:08:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 52b695b5-c24b-4ce3-b0ba-8a6c5a8460df 0xc0053f32b7 0xc0053f32b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0053f3318 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:08:49.023: INFO: Pod "test-cleanup-controller-wjggz" is available: &Pod{ObjectMeta:{test-cleanup-controller-wjggz test-cleanup-controller- deployment-7451 /api/v1/namespaces/deployment-7451/pods/test-cleanup-controller-wjggz 5934e678-46ae-4c28-9438-6fa49beebd4b 6788669 0 2020-04-09 22:08:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 494be4cc-087e-48de-a1af-8c7ecc087d54 0xc0005d42a7 0xc0005d42a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2lvq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2lvq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2lvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.206,StartTime:2020-04-09 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:08:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ed4cb444b2ad98e1b53be7a7cc067d48be420204744185de6078e02907ccd11,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:08:49.023: INFO: Pod 
"test-cleanup-deployment-55ffc6b7b6-h8pvt" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-h8pvt test-cleanup-deployment-55ffc6b7b6- deployment-7451 /api/v1/namespaces/deployment-7451/pods/test-cleanup-deployment-55ffc6b7b6-h8pvt 36a074e4-c39a-4165-88de-6ab5352e7ede 6788689 0 2020-04-09 22:08:48 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 39a424ef-7197-435f-a1ea-44928486900f 0xc0005d4be7 0xc0005d4be8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2lvq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2lvq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2lvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:08:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:49.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7451" for this suite. 
• [SLOW TEST:5.272 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":233,"skipped":3816,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:49.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-c7344a05-d255-49eb-9b4b-24a775976f51 STEP: Creating a pod to test consume secrets Apr 9 22:08:49.144: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f" in namespace "projected-7538" to be "success or failure" Apr 9 22:08:49.158: INFO: Pod "pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.738596ms Apr 9 22:08:51.162: INFO: Pod "pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018715295s Apr 9 22:08:53.178: INFO: Pod "pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f": Phase="Running", Reason="", readiness=true. Elapsed: 4.03438443s Apr 9 22:08:55.182: INFO: Pod "pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038685411s STEP: Saw pod success Apr 9 22:08:55.182: INFO: Pod "pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f" satisfied condition "success or failure" Apr 9 22:08:55.187: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f container secret-volume-test: STEP: delete the pod Apr 9 22:08:55.209: INFO: Waiting for pod pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f to disappear Apr 9 22:08:55.214: INFO: Pod pod-projected-secrets-5cb83c1b-7ecd-41c4-a707-a63cb0ce787f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:55.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7538" for this suite. 
• [SLOW TEST:6.157 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3824,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:55.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 9 22:08:55.270: INFO: Waiting up to 5m0s for pod "pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629" in namespace "emptydir-7785" to be "success or failure" Apr 9 22:08:55.275: INFO: Pod "pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629": Phase="Pending", Reason="", readiness=false. Elapsed: 4.704614ms Apr 9 22:08:57.279: INFO: Pod "pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009058506s Apr 9 22:08:59.283: INFO: Pod "pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013255427s STEP: Saw pod success Apr 9 22:08:59.283: INFO: Pod "pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629" satisfied condition "success or failure" Apr 9 22:08:59.287: INFO: Trying to get logs from node jerma-worker2 pod pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629 container test-container: STEP: delete the pod Apr 9 22:08:59.305: INFO: Waiting for pod pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629 to disappear Apr 9 22:08:59.327: INFO: Pod pod-e9ccf6ee-5632-4648-8dc5-589aaadf8629 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:59.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7785" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:59.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:08:59.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6080" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3863,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:08:59.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 9 22:09:04.120: INFO: Successfully updated pod "pod-update-144d246e-c6d6-4764-9fa4-a9b8ad55efea" STEP: verifying the updated pod is in kubernetes Apr 9 22:09:04.126: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:04.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1693" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3870,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:04.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2330a175-afe5-49fb-a2ea-b3abae6aaa6c STEP: Creating a pod to test consume configMaps Apr 9 22:09:04.202: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0" in namespace "configmap-2284" to be "success or failure" Apr 9 22:09:04.207: INFO: Pod "pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455774ms Apr 9 22:09:06.210: INFO: Pod "pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008189542s Apr 9 22:09:08.214: INFO: Pod "pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01221552s STEP: Saw pod success Apr 9 22:09:08.214: INFO: Pod "pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0" satisfied condition "success or failure" Apr 9 22:09:08.217: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0 container configmap-volume-test: STEP: delete the pod Apr 9 22:09:08.232: INFO: Waiting for pod pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0 to disappear Apr 9 22:09:08.237: INFO: Pod pod-configmaps-e7b355c9-11ca-47a6-a30f-e78f0cf31ad0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:08.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2284" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3890,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:08.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-33d8d605-103a-40fd-8542-625f3ad706ac [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 
9 22:09:08.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6144" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":239,"skipped":3898,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:08.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 22:09:09.053: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 22:09:11.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066949, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066949, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066949, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066949, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 22:09:14.092: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:14.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3860" for this suite. STEP: Destroying namespace "webhook-3860-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.240 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":240,"skipped":3902,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:14.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 22:09:15.342: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 22:09:17.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066955, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066955, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066955, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722066955, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 22:09:20.401: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:20.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4194" for this suite. STEP: Destroying namespace "webhook-4194-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.981 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":241,"skipped":3911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:20.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-84a567e5-d784-4825-8b8f-a2d548efd907 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:24.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4890" for this suite. 
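The ConfigMap "binary data should be reflected in volume" test above mounts a ConfigMap carrying both a text key and a binary key and waits for each to appear in the volume. A minimal sketch of such a ConfigMap, under the assumption that the generated object looks roughly like this (names are illustrative; the test appends a UUID suffix):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd        # illustrative; the test generates a UUID-suffixed name
data:
  data-1: value-1                 # plain-text key, surfaced as a text file in the volume
binaryData:
  dump.bin: aGVsbG8gd29ybGQ=     # base64-encoded bytes, surfaced as a raw binary file
```

The split between `data` and `binaryData` is what the test verifies: both keys end up as files under the same volume mount, with `binaryData` decoded back to raw bytes.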
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:24.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:42.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-29" for this suite. • [SLOW TEST:17.121 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":243,"skipped":3966,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:42.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 9 22:09:42.066: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 9 22:09:42.076: INFO: Waiting for terminating namespaces to be deleted... 
Apr 9 22:09:42.079: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 9 22:09:42.117: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:09:42.117: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:09:42.117: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:09:42.117: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 22:09:42.117: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 9 22:09:42.123: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:09:42.123: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:09:42.123: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 9 22:09:42.123: INFO: Container kube-bench ready: false, restart count 0 Apr 9 22:09:42.123: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:09:42.123: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 22:09:42.123: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 9 22:09:42.123: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3638ae55-11aa-4b04-bb13-69f19eeffbad 42 STEP: Trying to relaunch the pod, now with labels. 
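The NodeSelector predicate test above applies a random `kubernetes.io/e2e-*` label (value `42`) to the chosen node and relaunches the pod with a matching selector. A hedged sketch of the relaunched pod, with the label key and value taken from the log; the pod name and image are illustrative, not the ones the test generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels               # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-3638ae55-11aa-4b04-bb13-69f19eeffbad: "42"  # label from the log above
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1   # assumption: any always-running image works here
```

With the selector matching only the freshly labeled node, the scheduler must place the pod on `jerma-worker`, which is what the test asserts before removing the label again.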
STEP: removing the label kubernetes.io/e2e-3638ae55-11aa-4b04-bb13-69f19eeffbad off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3638ae55-11aa-4b04-bb13-69f19eeffbad [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:50.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1381" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.339 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":244,"skipped":3980,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:50.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:09:56.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2524" for this suite. STEP: Destroying namespace "nsdeletetest-9675" for this suite. Apr 9 22:09:56.627: INFO: Namespace nsdeletetest-9675 was already deleted STEP: Destroying namespace "nsdeletetest-5412" for this suite. • [SLOW TEST:6.271 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":245,"skipped":3982,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:09:56.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-6255 STEP: creating replication controller nodeport-test in namespace services-6255 I0409 22:09:56.802946 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6255, replica count: 2 I0409 22:09:59.853458 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 22:10:02.853717 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 9 22:10:02.853: INFO: Creating new exec pod Apr 9 22:10:07.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6255 execpodgd9ws -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 9 22:10:08.120: INFO: stderr: "I0409 22:10:08.035984 3350 log.go:172] (0xc000a08000) (0xc000a30000) Create stream\nI0409 22:10:08.036052 3350 log.go:172] (0xc000a08000) (0xc000a30000) Stream added, broadcasting: 1\nI0409 22:10:08.038874 3350 log.go:172] (0xc000a08000) Reply frame received for 1\nI0409 22:10:08.038900 3350 log.go:172] (0xc000a08000) (0xc000a68640) Create stream\nI0409 22:10:08.038908 3350 log.go:172] (0xc000a08000) (0xc000a68640) Stream added, broadcasting: 3\nI0409 22:10:08.039746 3350 log.go:172] (0xc000a08000) Reply frame received for 3\nI0409 22:10:08.039791 3350 log.go:172] (0xc000a08000) (0xc000a301e0) Create stream\nI0409 22:10:08.039804 3350 log.go:172] (0xc000a08000) (0xc000a301e0) Stream added, 
broadcasting: 5\nI0409 22:10:08.040527 3350 log.go:172] (0xc000a08000) Reply frame received for 5\nI0409 22:10:08.113647 3350 log.go:172] (0xc000a08000) Data frame received for 5\nI0409 22:10:08.113692 3350 log.go:172] (0xc000a301e0) (5) Data frame handling\nI0409 22:10:08.113707 3350 log.go:172] (0xc000a301e0) (5) Data frame sent\nI0409 22:10:08.113718 3350 log.go:172] (0xc000a08000) Data frame received for 5\nI0409 22:10:08.113736 3350 log.go:172] (0xc000a301e0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0409 22:10:08.113801 3350 log.go:172] (0xc000a08000) Data frame received for 3\nI0409 22:10:08.113843 3350 log.go:172] (0xc000a68640) (3) Data frame handling\nI0409 22:10:08.115378 3350 log.go:172] (0xc000a08000) Data frame received for 1\nI0409 22:10:08.115393 3350 log.go:172] (0xc000a30000) (1) Data frame handling\nI0409 22:10:08.115406 3350 log.go:172] (0xc000a30000) (1) Data frame sent\nI0409 22:10:08.115421 3350 log.go:172] (0xc000a08000) (0xc000a30000) Stream removed, broadcasting: 1\nI0409 22:10:08.115894 3350 log.go:172] (0xc000a08000) Go away received\nI0409 22:10:08.116313 3350 log.go:172] (0xc000a08000) (0xc000a30000) Stream removed, broadcasting: 1\nI0409 22:10:08.116346 3350 log.go:172] (0xc000a08000) (0xc000a68640) Stream removed, broadcasting: 3\nI0409 22:10:08.116357 3350 log.go:172] (0xc000a08000) (0xc000a301e0) Stream removed, broadcasting: 5\n" Apr 9 22:10:08.121: INFO: stdout: "" Apr 9 22:10:08.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6255 execpodgd9ws -- /bin/sh -x -c nc -zv -t -w 2 10.96.193.198 80' Apr 9 22:10:08.350: INFO: stderr: "I0409 22:10:08.258722 3370 log.go:172] (0xc000234a50) (0xc0006b7ae0) Create stream\nI0409 22:10:08.258809 3370 log.go:172] (0xc000234a50) (0xc0006b7ae0) Stream added, broadcasting: 1\nI0409 22:10:08.262317 3370 log.go:172] (0xc000234a50) Reply frame received for 1\nI0409 
22:10:08.262392 3370 log.go:172] (0xc000234a50) (0xc000aa2000) Create stream\nI0409 22:10:08.262417 3370 log.go:172] (0xc000234a50) (0xc000aa2000) Stream added, broadcasting: 3\nI0409 22:10:08.263624 3370 log.go:172] (0xc000234a50) Reply frame received for 3\nI0409 22:10:08.263662 3370 log.go:172] (0xc000234a50) (0xc0006b7cc0) Create stream\nI0409 22:10:08.263679 3370 log.go:172] (0xc000234a50) (0xc0006b7cc0) Stream added, broadcasting: 5\nI0409 22:10:08.264650 3370 log.go:172] (0xc000234a50) Reply frame received for 5\nI0409 22:10:08.340342 3370 log.go:172] (0xc000234a50) Data frame received for 3\nI0409 22:10:08.340415 3370 log.go:172] (0xc000aa2000) (3) Data frame handling\nI0409 22:10:08.340500 3370 log.go:172] (0xc000234a50) Data frame received for 5\nI0409 22:10:08.340526 3370 log.go:172] (0xc0006b7cc0) (5) Data frame handling\nI0409 22:10:08.340543 3370 log.go:172] (0xc0006b7cc0) (5) Data frame sent\nI0409 22:10:08.340551 3370 log.go:172] (0xc000234a50) Data frame received for 5\nI0409 22:10:08.340559 3370 log.go:172] (0xc0006b7cc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.193.198 80\nConnection to 10.96.193.198 80 port [tcp/http] succeeded!\nI0409 22:10:08.346515 3370 log.go:172] (0xc000234a50) Data frame received for 1\nI0409 22:10:08.346535 3370 log.go:172] (0xc0006b7ae0) (1) Data frame handling\nI0409 22:10:08.346545 3370 log.go:172] (0xc0006b7ae0) (1) Data frame sent\nI0409 22:10:08.346555 3370 log.go:172] (0xc000234a50) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0409 22:10:08.346563 3370 log.go:172] (0xc000234a50) Go away received\nI0409 22:10:08.346810 3370 log.go:172] (0xc000234a50) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0409 22:10:08.346831 3370 log.go:172] (0xc000234a50) (0xc000aa2000) Stream removed, broadcasting: 3\nI0409 22:10:08.346837 3370 log.go:172] (0xc000234a50) (0xc0006b7cc0) Stream removed, broadcasting: 5\n" Apr 9 22:10:08.350: INFO: stdout: "" Apr 9 22:10:08.350: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-6255 execpodgd9ws -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30323' Apr 9 22:10:08.559: INFO: stderr: "I0409 22:10:08.472553 3391 log.go:172] (0xc000104a50) (0xc00073dae0) Create stream\nI0409 22:10:08.472608 3391 log.go:172] (0xc000104a50) (0xc00073dae0) Stream added, broadcasting: 1\nI0409 22:10:08.475375 3391 log.go:172] (0xc000104a50) Reply frame received for 1\nI0409 22:10:08.475406 3391 log.go:172] (0xc000104a50) (0xc000b0a000) Create stream\nI0409 22:10:08.475415 3391 log.go:172] (0xc000104a50) (0xc000b0a000) Stream added, broadcasting: 3\nI0409 22:10:08.476420 3391 log.go:172] (0xc000104a50) Reply frame received for 3\nI0409 22:10:08.476474 3391 log.go:172] (0xc000104a50) (0xc000308000) Create stream\nI0409 22:10:08.476490 3391 log.go:172] (0xc000104a50) (0xc000308000) Stream added, broadcasting: 5\nI0409 22:10:08.477743 3391 log.go:172] (0xc000104a50) Reply frame received for 5\nI0409 22:10:08.554332 3391 log.go:172] (0xc000104a50) Data frame received for 5\nI0409 22:10:08.554353 3391 log.go:172] (0xc000308000) (5) Data frame handling\nI0409 22:10:08.554361 3391 log.go:172] (0xc000308000) (5) Data frame sent\nI0409 22:10:08.554366 3391 log.go:172] (0xc000104a50) Data frame received for 5\nI0409 22:10:08.554377 3391 log.go:172] (0xc000308000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30323\nConnection to 172.17.0.10 30323 port [tcp/30323] succeeded!\nI0409 22:10:08.554393 3391 log.go:172] (0xc000104a50) Data frame received for 3\nI0409 22:10:08.554399 3391 log.go:172] (0xc000b0a000) (3) Data frame handling\nI0409 22:10:08.555515 3391 log.go:172] (0xc000104a50) Data frame received for 1\nI0409 22:10:08.555543 3391 log.go:172] (0xc00073dae0) (1) Data frame handling\nI0409 22:10:08.555553 3391 log.go:172] (0xc00073dae0) (1) Data frame sent\nI0409 22:10:08.555566 3391 log.go:172] (0xc000104a50) (0xc00073dae0) Stream removed, broadcasting: 1\nI0409 22:10:08.555587 3391 log.go:172] 
(0xc000104a50) Go away received\nI0409 22:10:08.555947 3391 log.go:172] (0xc000104a50) (0xc00073dae0) Stream removed, broadcasting: 1\nI0409 22:10:08.555962 3391 log.go:172] (0xc000104a50) (0xc000b0a000) Stream removed, broadcasting: 3\nI0409 22:10:08.555969 3391 log.go:172] (0xc000104a50) (0xc000308000) Stream removed, broadcasting: 5\n" Apr 9 22:10:08.559: INFO: stdout: "" Apr 9 22:10:08.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6255 execpodgd9ws -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30323' Apr 9 22:10:08.747: INFO: stderr: "I0409 22:10:08.688242 3414 log.go:172] (0xc0000f4e70) (0xc0006a7c20) Create stream\nI0409 22:10:08.688289 3414 log.go:172] (0xc0000f4e70) (0xc0006a7c20) Stream added, broadcasting: 1\nI0409 22:10:08.690184 3414 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0409 22:10:08.690228 3414 log.go:172] (0xc0000f4e70) (0xc000992000) Create stream\nI0409 22:10:08.690262 3414 log.go:172] (0xc0000f4e70) (0xc000992000) Stream added, broadcasting: 3\nI0409 22:10:08.691212 3414 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0409 22:10:08.691270 3414 log.go:172] (0xc0000f4e70) (0xc000314000) Create stream\nI0409 22:10:08.691297 3414 log.go:172] (0xc0000f4e70) (0xc000314000) Stream added, broadcasting: 5\nI0409 22:10:08.692288 3414 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0409 22:10:08.739466 3414 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0409 22:10:08.739511 3414 log.go:172] (0xc000992000) (3) Data frame handling\nI0409 22:10:08.739548 3414 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0409 22:10:08.739575 3414 log.go:172] (0xc000314000) (5) Data frame handling\nI0409 22:10:08.739610 3414 log.go:172] (0xc000314000) (5) Data frame sent\nI0409 22:10:08.739624 3414 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0409 22:10:08.739634 3414 log.go:172] (0xc000314000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30323\nConnection to 
172.17.0.8 30323 port [tcp/30323] succeeded!\nI0409 22:10:08.739685 3414 log.go:172] (0xc000314000) (5) Data frame sent\nI0409 22:10:08.739719 3414 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0409 22:10:08.739744 3414 log.go:172] (0xc000314000) (5) Data frame handling\nI0409 22:10:08.741815 3414 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0409 22:10:08.741841 3414 log.go:172] (0xc0006a7c20) (1) Data frame handling\nI0409 22:10:08.741861 3414 log.go:172] (0xc0006a7c20) (1) Data frame sent\nI0409 22:10:08.741879 3414 log.go:172] (0xc0000f4e70) (0xc0006a7c20) Stream removed, broadcasting: 1\nI0409 22:10:08.741898 3414 log.go:172] (0xc0000f4e70) Go away received\nI0409 22:10:08.742281 3414 log.go:172] (0xc0000f4e70) (0xc0006a7c20) Stream removed, broadcasting: 1\nI0409 22:10:08.742306 3414 log.go:172] (0xc0000f4e70) (0xc000992000) Stream removed, broadcasting: 3\nI0409 22:10:08.742317 3414 log.go:172] (0xc0000f4e70) (0xc000314000) Stream removed, broadcasting: 5\n" Apr 9 22:10:08.747: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:08.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6255" for this suite. 
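Each `kubectl exec` above runs `nc -zv -t -w 2 <addr> <port>` inside the exec pod to verify TCP reachability of the service name, the cluster IP, and both node IPs on the allocated NodePort (30323). The same zero-I/O connect check can be sketched locally without a cluster; the listener below stands in for a NodePort endpoint, and a Python socket replaces `nc` so the sketch does not depend on a netcat binary (port and address are illustrative):

```shell
# Stand-in for the NodePort endpoint: a local listener (assumption: port 30323 is free).
python3 -m http.server 30323 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
# Zero-I/O connect probe with a 2s timeout, same semantics as the
# `nc -zv -t -w 2 <addr> <port>` calls in the log above.
python3 -c 'import socket; socket.create_connection(("127.0.0.1", 30323), 2).close(); print("Connection to 127.0.0.1 30323 succeeded!")'
kill "$srv"
```

A successful connect (and nothing more) is all the e2e test needs: the probe exits zero, and the "succeeded!" line in the captured stderr is what shows up in the log.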
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.124 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":246,"skipped":4003,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:08.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e974df75-74d3-42f4-a154-07188a66e723 STEP: Creating a pod to test consume secrets Apr 9 22:10:09.025: INFO: Waiting up to 5m0s for pod "pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935" in namespace "secrets-2235" to be "success or failure" Apr 9 22:10:09.032: INFO: Pod "pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.409703ms Apr 9 22:10:11.071: INFO: Pod "pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045832594s Apr 9 22:10:13.075: INFO: Pod "pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049982545s STEP: Saw pod success Apr 9 22:10:13.075: INFO: Pod "pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935" satisfied condition "success or failure" Apr 9 22:10:13.079: INFO: Trying to get logs from node jerma-worker pod pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935 container secret-volume-test: STEP: delete the pod Apr 9 22:10:13.120: INFO: Waiting for pod pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935 to disappear Apr 9 22:10:13.127: INFO: Pod pod-secrets-5d4ee058-37ce-43a8-aba6-db0b7d680935 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:13.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2235" for this suite. 
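The secret test above mounts a secret volume with `defaultMode` set and waits for the test pod to reach "success or failure". A minimal sketch of such a pod follows; the secret name, image, and args are assumptions modeled on the e2e mounttest pattern (the actual generated names appear in the log):

```yaml
# Sketch of a pod consuming a secret volume with defaultMode (names illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_mode=/etc/secret-volume/data-1"]          # prints the file's mode
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      defaultMode: 0400   # files in the volume are created owner-read-only
```

The test asserts the pod phase goes Pending → Succeeded, then reads the container log to verify the reported mode before deleting the pod.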
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4009,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:13.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 9 22:10:13.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8446' Apr 9 22:10:13.430: INFO: stderr: "" Apr 9 22:10:13.430: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 9 22:10:13.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8446' Apr 9 22:10:13.531: INFO: stderr: "" Apr 9 22:10:13.531: INFO: stdout: "update-demo-nautilus-grg78 update-demo-nautilus-x6v9r " Apr 9 22:10:13.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grg78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:13.629: INFO: stderr: "" Apr 9 22:10:13.629: INFO: stdout: "" Apr 9 22:10:13.629: INFO: update-demo-nautilus-grg78 is created but not running Apr 9 22:10:18.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8446' Apr 9 22:10:18.728: INFO: stderr: "" Apr 9 22:10:18.728: INFO: stdout: "update-demo-nautilus-grg78 update-demo-nautilus-x6v9r " Apr 9 22:10:18.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grg78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:18.836: INFO: stderr: "" Apr 9 22:10:18.836: INFO: stdout: "true" Apr 9 22:10:18.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grg78 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:18.938: INFO: stderr: "" Apr 9 22:10:18.938: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 22:10:18.938: INFO: validating pod update-demo-nautilus-grg78 Apr 9 22:10:18.943: INFO: got data: { "image": "nautilus.jpg" } Apr 9 22:10:18.943: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 9 22:10:18.943: INFO: update-demo-nautilus-grg78 is verified up and running Apr 9 22:10:18.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x6v9r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:19.031: INFO: stderr: "" Apr 9 22:10:19.031: INFO: stdout: "true" Apr 9 22:10:19.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x6v9r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:19.120: INFO: stderr: "" Apr 9 22:10:19.120: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 22:10:19.120: INFO: validating pod update-demo-nautilus-x6v9r Apr 9 22:10:19.123: INFO: got data: { "image": "nautilus.jpg" } Apr 9 22:10:19.123: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 9 22:10:19.123: INFO: update-demo-nautilus-x6v9r is verified up and running STEP: rolling-update to new replication controller Apr 9 22:10:19.125: INFO: scanned /root for discovery docs: Apr 9 22:10:19.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8446' Apr 9 22:10:41.688: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 9 22:10:41.688: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 9 22:10:41.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8446' Apr 9 22:10:41.810: INFO: stderr: "" Apr 9 22:10:41.810: INFO: stdout: "update-demo-kitten-6lrdm update-demo-kitten-kcxpw " Apr 9 22:10:41.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6lrdm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:41.906: INFO: stderr: "" Apr 9 22:10:41.906: INFO: stdout: "true" Apr 9 22:10:41.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6lrdm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:42.007: INFO: stderr: "" Apr 9 22:10:42.007: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 9 22:10:42.007: INFO: validating pod update-demo-kitten-6lrdm Apr 9 22:10:42.011: INFO: got data: { "image": "kitten.jpg" } Apr 9 22:10:42.012: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 9 22:10:42.012: INFO: update-demo-kitten-6lrdm is verified up and running Apr 9 22:10:42.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kcxpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:42.107: INFO: stderr: "" Apr 9 22:10:42.107: INFO: stdout: "true" Apr 9 22:10:42.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kcxpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8446' Apr 9 22:10:42.234: INFO: stderr: "" Apr 9 22:10:42.234: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 9 22:10:42.234: INFO: validating pod update-demo-kitten-kcxpw Apr 9 22:10:42.238: INFO: got data: { "image": "kitten.jpg" } Apr 9 22:10:42.239: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 9 22:10:42.239: INFO: update-demo-kitten-kcxpw is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:42.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8446" for this suite. 
• [SLOW TEST:29.096 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":248,"skipped":4015,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:42.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:42.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pods-9063" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":249,"skipped":4038,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:42.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 9 22:10:48.563: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7305 PodName:pod-sharedvolume-5c03d7bc-dbe2-4db3-8df1-dd61c3cbcc87 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 22:10:48.563: INFO: >>> kubeConfig: /root/.kube/config I0409 22:10:48.597373 6 log.go:172] (0xc001d0a4d0) (0xc000d50fa0) Create stream I0409 22:10:48.597403 6 log.go:172] (0xc001d0a4d0) (0xc000d50fa0) Stream added, broadcasting: 1 I0409 22:10:48.599099 6 log.go:172] (0xc001d0a4d0) Reply frame received for 1 I0409 22:10:48.599136 6 log.go:172] (0xc001d0a4d0) (0xc001d64320) Create stream I0409 22:10:48.599150 6 log.go:172] (0xc001d0a4d0) (0xc001d64320) Stream added, broadcasting: 3 I0409 22:10:48.600172 6 log.go:172] 
(0xc001d0a4d0) Reply frame received for 3 I0409 22:10:48.600229 6 log.go:172] (0xc001d0a4d0) (0xc000d51180) Create stream I0409 22:10:48.600247 6 log.go:172] (0xc001d0a4d0) (0xc000d51180) Stream added, broadcasting: 5 I0409 22:10:48.601023 6 log.go:172] (0xc001d0a4d0) Reply frame received for 5 I0409 22:10:48.671111 6 log.go:172] (0xc001d0a4d0) Data frame received for 5 I0409 22:10:48.671160 6 log.go:172] (0xc000d51180) (5) Data frame handling I0409 22:10:48.671185 6 log.go:172] (0xc001d0a4d0) Data frame received for 3 I0409 22:10:48.671194 6 log.go:172] (0xc001d64320) (3) Data frame handling I0409 22:10:48.671205 6 log.go:172] (0xc001d64320) (3) Data frame sent I0409 22:10:48.671215 6 log.go:172] (0xc001d0a4d0) Data frame received for 3 I0409 22:10:48.671223 6 log.go:172] (0xc001d64320) (3) Data frame handling I0409 22:10:48.672694 6 log.go:172] (0xc001d0a4d0) Data frame received for 1 I0409 22:10:48.672723 6 log.go:172] (0xc000d50fa0) (1) Data frame handling I0409 22:10:48.672735 6 log.go:172] (0xc000d50fa0) (1) Data frame sent I0409 22:10:48.672745 6 log.go:172] (0xc001d0a4d0) (0xc000d50fa0) Stream removed, broadcasting: 1 I0409 22:10:48.672772 6 log.go:172] (0xc001d0a4d0) Go away received I0409 22:10:48.672813 6 log.go:172] (0xc001d0a4d0) (0xc000d50fa0) Stream removed, broadcasting: 1 I0409 22:10:48.672825 6 log.go:172] (0xc001d0a4d0) (0xc001d64320) Stream removed, broadcasting: 3 I0409 22:10:48.672845 6 log.go:172] (0xc001d0a4d0) (0xc000d51180) Stream removed, broadcasting: 5 Apr 9 22:10:48.672: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:48.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7305" for this suite. 
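The EmptyDir test above verifies that two containers in one pod observe the same `emptyDir` volume: one container writes `shareddata.txt`, and the test execs `cat /usr/share/volumeshare/shareddata.txt` in `busybox-main-container`. A minimal sketch of the pattern follows; the log confirms only the container names, namespace, and the mount path read from the main container, so the images, writer command, and second mount path are assumptions:

```yaml
# Sketch of a two-container pod sharing an emptyDir volume (details illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example
spec:
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]   # reader side; stays up for exec
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: nginx-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          # writer side: drop a file into the shared volume on start
          command: ["/bin/sh", "-c", "echo 'Hello from the second container' > /usr/share/volumeshare/shareddata.txt"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}
```

Running `kubectl exec pod-sharedvolume-example -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt` then returns the data written by the other container, which is what the `ExecWithOptions` call in the log checks.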
• [SLOW TEST:6.362 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":250,"skipped":4054,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:48.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 9 22:10:48.870: INFO: Waiting up to 5m0s for pod "downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38" in namespace "downward-api-1732" to be "success or failure" Apr 9 22:10:48.875: INFO: Pod "downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.787706ms Apr 9 22:10:50.879: INFO: Pod "downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008738153s Apr 9 22:10:52.883: INFO: Pod "downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01310339s STEP: Saw pod success Apr 9 22:10:52.883: INFO: Pod "downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38" satisfied condition "success or failure" Apr 9 22:10:52.886: INFO: Trying to get logs from node jerma-worker2 pod downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38 container dapi-container: STEP: delete the pod Apr 9 22:10:52.921: INFO: Waiting for pod downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38 to disappear Apr 9 22:10:52.943: INFO: Pod downward-api-8d9ae2aa-b6a1-4c87-9b92-c12b4eda2f38 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:52.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1732" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4076,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:52.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:53.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2147" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":252,"skipped":4094,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:53.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 9 22:10:53.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod 
--restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4377' Apr 9 22:10:53.289: INFO: stderr: "" Apr 9 22:10:53.289: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Apr 9 22:10:53.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4377' Apr 9 22:10:53.501: INFO: stderr: "" Apr 9 22:10:53.501: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:10:53.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4377" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":253,"skipped":4095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:10:53.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:10:53.613: INFO: >>> 
kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 9 22:10:56.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 create -f -' Apr 9 22:10:59.198: INFO: stderr: "" Apr 9 22:10:59.198: INFO: stdout: "e2e-test-crd-publish-openapi-3599-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 9 22:10:59.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 delete e2e-test-crd-publish-openapi-3599-crds test-foo' Apr 9 22:10:59.310: INFO: stderr: "" Apr 9 22:10:59.310: INFO: stdout: "e2e-test-crd-publish-openapi-3599-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 9 22:10:59.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 apply -f -' Apr 9 22:10:59.543: INFO: stderr: "" Apr 9 22:10:59.543: INFO: stdout: "e2e-test-crd-publish-openapi-3599-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 9 22:10:59.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 delete e2e-test-crd-publish-openapi-3599-crds test-foo' Apr 9 22:10:59.729: INFO: stderr: "" Apr 9 22:10:59.729: INFO: stdout: "e2e-test-crd-publish-openapi-3599-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 9 22:10:59.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 create -f -' Apr 9 22:10:59.992: INFO: rc: 1 Apr 9 22:10:59.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 apply -f -' Apr 9 22:11:00.277: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) 
rejects request without required properties Apr 9 22:11:00.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 create -f -' Apr 9 22:11:00.520: INFO: rc: 1 Apr 9 22:11:00.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-812 apply -f -' Apr 9 22:11:00.738: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 9 22:11:00.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3599-crds' Apr 9 22:11:00.972: INFO: stderr: "" Apr 9 22:11:00.972: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3599-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 9 22:11:00.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3599-crds.metadata' Apr 9 22:11:01.214: INFO: stderr: "" Apr 9 22:11:01.214: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3599-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An\n empty namespace is equivalent to the \"default\" namespace, but \"default\" is\n the canonical representation. Not all objects are required to be scoped to\n a namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 9 22:11:01.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3599-crds.spec' Apr 9 22:11:01.437: INFO: stderr: "" Apr 9 22:11:01.437: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3599-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 9 22:11:01.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3599-crds.spec.bars' Apr 9 22:11:01.692: INFO: stderr: "" Apr 9 22:11:01.692: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3599-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl 
explain works to return error when explain is called on property that doesn't exist Apr 9 22:11:01.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3599-crds.spec.bars2' Apr 9 22:11:02.001: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:11:04.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-812" for this suite. • [SLOW TEST:11.463 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":254,"skipped":4122,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:11:04.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:11:05.043: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 9 22:11:10.062: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 9 22:11:10.062: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 9 22:11:12.066: INFO: Creating deployment "test-rollover-deployment" Apr 9 22:11:12.075: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 9 22:11:14.082: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 9 22:11:14.088: INFO: Ensure that both replica sets have 1 created replica Apr 9 22:11:14.094: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 9 22:11:14.106: INFO: Updating deployment test-rollover-deployment Apr 9 22:11:14.106: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 9 22:11:16.168: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 9 22:11:16.173: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 9 22:11:16.616: INFO: all replica sets need to contain the pod-template-hash label Apr 9 22:11:16.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067074, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:11:18.624: INFO: all replica sets need to contain the pod-template-hash label Apr 9 22:11:18.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067077, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:11:20.625: INFO: all replica sets need to contain the pod-template-hash label Apr 9 22:11:20.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067077, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:11:22.624: INFO: all replica sets need to contain the pod-template-hash label Apr 9 22:11:22.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067077, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:11:24.624: INFO: all replica sets need to contain the pod-template-hash label Apr 9 22:11:24.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, 
loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067077, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:11:26.624: INFO: all replica sets need to contain the pod-template-hash label Apr 9 22:11:26.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067077, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067072, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 22:11:28.624: INFO: Apr 9 22:11:28.624: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 9 22:11:28.632: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4148 
/apis/apps/v1/namespaces/deployment-4148/deployments/test-rollover-deployment bcf1a0c1-6aa3-4e28-820c-a24c8dc93f2c 6790029 2 2020-04-09 22:11:12 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056de5b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-09 22:11:12 +0000 UTC,LastTransitionTime:2020-04-09 22:11:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-09 22:11:27 +0000 UTC,LastTransitionTime:2020-04-09 22:11:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 9 
22:11:28.636: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4148 /apis/apps/v1/namespaces/deployment-4148/replicasets/test-rollover-deployment-574d6dfbff 7e66f779-8698-4429-9d2a-f6c2e6c002eb 6790018 2 2020-04-09 22:11:14 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment bcf1a0c1-6aa3-4e28-820c-a24c8dc93f2c 0xc0055eb227 0xc0055eb228}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055eb2a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:11:28.636: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 9 22:11:28.636: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4148 
/apis/apps/v1/namespaces/deployment-4148/replicasets/test-rollover-controller e0679213-1a60-466d-b11f-ae7c3f074041 6790027 2 2020-04-09 22:11:05 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment bcf1a0c1-6aa3-4e28-820c-a24c8dc93f2c 0xc0055eb137 0xc0055eb138}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055eb1a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:11:28.636: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4148 /apis/apps/v1/namespaces/deployment-4148/replicasets/test-rollover-deployment-f6c94f66c e66d6507-e590-4b36-8f9d-3e2ea428c3a5 6789971 2 2020-04-09 22:11:12 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment bcf1a0c1-6aa3-4e28-820c-a24c8dc93f2c 0xc0055eb320 0xc0055eb321}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055eb3b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:11:28.640: INFO: Pod "test-rollover-deployment-574d6dfbff-gxpxf" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-gxpxf test-rollover-deployment-574d6dfbff- deployment-4148 /api/v1/namespaces/deployment-4148/pods/test-rollover-deployment-574d6dfbff-gxpxf beaec07c-3bf5-4a94-a0fa-4736981654c4 6789985 0 2020-04-09 22:11:14 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 7e66f779-8698-4429-9d2a-f6c2e6c002eb 0xc0056dea37 0xc0056dea38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bbsnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bbsnb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bbsnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:11:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:11:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:11:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.217,StartTime:2020-04-09 22:11:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:11:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6676b9d30cb4394e0fa38d57427ad76874ae42312ead58df751b84e096c1656c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:11:28.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4148" for this suite. • [SLOW TEST:23.675 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":255,"skipped":4123,"failed":0} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:11:28.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] 
should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2545 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2545 STEP: Deleting pre-stop pod Apr 9 22:11:41.765: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:11:41.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2545" for this suite. 
• [SLOW TEST:13.171 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":256,"skipped":4123,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:11:41.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 9 22:11:41.990: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3448 /api/v1/namespaces/watch-3448/configmaps/e2e-watch-test-resource-version 993dd9b3-735a-4fd6-b95a-e40d34470353 6790141 0 2020-04-09 22:11:41 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} 
Apr 9 22:11:41.990: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3448 /api/v1/namespaces/watch-3448/configmaps/e2e-watch-test-resource-version 993dd9b3-735a-4fd6-b95a-e40d34470353 6790142 0 2020-04-09 22:11:41 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:11:41.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3448" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":257,"skipped":4127,"failed":0}
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:11:42.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-d912827a-31aa-4acf-ba81-42fe55c6ec08
STEP: Creating a pod to test consume configMaps
Apr 9 22:11:42.264: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f" in namespace "configmap-5059" to be "success or failure"
Apr 9 22:11:42.283: INFO: Pod "pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.446324ms
Apr 9 22:11:44.287: INFO: Pod "pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022948606s
Apr 9 22:11:46.290: INFO: Pod "pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026668098s
STEP: Saw pod success
Apr 9 22:11:46.290: INFO: Pod "pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f" satisfied condition "success or failure"
Apr 9 22:11:46.293: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f container configmap-volume-test:
STEP: delete the pod
Apr 9 22:11:46.341: INFO: Waiting for pod pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f to disappear
Apr 9 22:11:46.377: INFO: Pod pod-configmaps-b6a725c5-2c36-4d57-a376-329272992b5f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:11:46.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5059" for this suite.
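The ConfigMap-volume test above creates a pod that mounts a generated ConfigMap as a file and must reach Succeeded. A minimal sketch of an equivalent manifest, assuming illustrative names (the test generates random ones) and agnhost's mounttest-style args:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config            # hypothetical; the test uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo    # hypothetical
spec:
  restartPolicy: Never         # the pod must terminate in Succeeded, not restart
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image seen elsewhere in this log
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"]  # args are an assumption
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
```

The framework then reads the container's logs and checks the file content, which is the "Trying to get logs ... container configmap-volume-test" step in the log.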
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4128,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:11:46.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 9 22:11:50.496: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:11:50.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7909" for this suite.
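The termination-message test verifies that with TerminationMessagePolicy FallbackToLogsOnError, a container that succeeds (and writes nothing to the message path) ends with an empty termination message, since logs are only substituted on failure. A sketch of such a pod, with hypothetical names and a busybox command standing in for whatever the test actually runs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                              # assumption; the test uses its own image
    command: ["/bin/sh", "-c", "exit 0"]        # succeeds without writing the message file
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError  # logs are used only when the container fails
```

On a failed container the kubelet would instead copy the tail of the container's log into `status.containerStatuses[].state.terminated.message`.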
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:11:50.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 22:11:51.225: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 22:11:53.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067111, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067111, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067111, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067111, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 22:11:56.290: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:12:06.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2062" for this suite.
STEP: Destroying namespace "webhook-2062-markers" for this suite.
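The deny test registers a validating webhook pointing at the `e2e-test-webhook` service deployed above, then confirms pod and configmap CREATE/PUT/PATCH requests are rejected except in a whitelisted "markers" namespace. A sketch of such a registration, assuming hypothetical names and paths (only the service name comes from the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-objects      # hypothetical
webhooks:
- name: deny-unwanted-objects.webhook.example.com   # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-2062      # namespace taken from the log above
      name: e2e-test-webhook       # service name taken from the log above
      path: /validate              # assumption
    caBundle: <base64-encoded CA certificate>
  namespaceSelector: {}            # the test scopes this so the marker namespace bypasses the webhook
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
```

The "whitelisted namespace" step works by labeling the bypass namespace so the `namespaceSelector` no longer matches it.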
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.995 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":260,"skipped":4173,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:12:06.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Apr 9 22:12:06.613: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:12:14.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1468" for this suite.
• [SLOW TEST:8.199 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":261,"skipped":4176,"failed":0}
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:12:14.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 9 22:12:14.799: INFO: Created pod &Pod{ObjectMeta:{dns-2624 dns-2624 /api/v1/namespaces/dns-2624/pods/dns-2624 1f418b11-fc49-4c7f-b7d2-7eb860cd2a50 6790409 0 2020-04-09 22:12:14 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dllvk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dllvk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dllvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 9 22:12:18.834: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2624 PodName:dns-2624 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 22:12:18.834: INFO: >>> kubeConfig: /root/.kube/config I0409 22:12:18.868100 6 log.go:172] (0xc002142370) (0xc000fca460) Create stream I0409 22:12:18.868133 6 log.go:172] (0xc002142370) (0xc000fca460) Stream added, broadcasting: 1 I0409 22:12:18.870249 6 log.go:172] (0xc002142370) Reply frame received for 1 I0409 22:12:18.870292 6 log.go:172] (0xc002142370) (0xc001232820) Create stream I0409 22:12:18.870308 6 log.go:172] (0xc002142370) (0xc001232820) Stream added, broadcasting: 3 I0409 22:12:18.871177 6 log.go:172] (0xc002142370) Reply frame received for 3 I0409 22:12:18.871217 6 log.go:172] (0xc002142370) (0xc001232dc0) Create stream I0409 22:12:18.871233 6 log.go:172] (0xc002142370) (0xc001232dc0) Stream added, broadcasting: 5 I0409 22:12:18.872270 6 log.go:172] (0xc002142370) Reply frame received for 5 I0409 22:12:18.965380 6 log.go:172] (0xc002142370) Data frame received for 3 I0409 22:12:18.965408 6 log.go:172] (0xc001232820) (3) Data frame handling I0409 22:12:18.965420 6 log.go:172] (0xc001232820) (3) Data frame sent I0409 22:12:18.966223 6 log.go:172] (0xc002142370) Data frame received for 3 I0409 22:12:18.966254 6 log.go:172] (0xc001232820) (3) Data frame handling I0409 22:12:18.966418 6 log.go:172] (0xc002142370) Data frame received for 5 I0409 22:12:18.966433 6 log.go:172] (0xc001232dc0) (5) Data frame handling I0409 22:12:18.968619 6 log.go:172] (0xc002142370) Data frame received for 1 I0409 22:12:18.968635 6 log.go:172] (0xc000fca460) (1) Data frame handling I0409 22:12:18.968647 6 log.go:172] (0xc000fca460) (1) Data frame sent I0409 22:12:18.968665 6 log.go:172] (0xc002142370) (0xc000fca460) Stream removed, broadcasting: 1 I0409 22:12:18.968690 6 log.go:172] (0xc002142370) Go away received I0409 22:12:18.968798 6 log.go:172] (0xc002142370) 
(0xc000fca460) Stream removed, broadcasting: 1 I0409 22:12:18.968818 6 log.go:172] (0xc002142370) (0xc001232820) Stream removed, broadcasting: 3 I0409 22:12:18.968828 6 log.go:172] (0xc002142370) (0xc001232dc0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 9 22:12:18.968: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2624 PodName:dns-2624 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 22:12:18.968: INFO: >>> kubeConfig: /root/.kube/config I0409 22:12:18.996924 6 log.go:172] (0xc001d0a4d0) (0xc001233540) Create stream I0409 22:12:18.996950 6 log.go:172] (0xc001d0a4d0) (0xc001233540) Stream added, broadcasting: 1 I0409 22:12:18.998927 6 log.go:172] (0xc001d0a4d0) Reply frame received for 1 I0409 22:12:18.998972 6 log.go:172] (0xc001d0a4d0) (0xc0027efea0) Create stream I0409 22:12:18.998987 6 log.go:172] (0xc001d0a4d0) (0xc0027efea0) Stream added, broadcasting: 3 I0409 22:12:18.999762 6 log.go:172] (0xc001d0a4d0) Reply frame received for 3 I0409 22:12:18.999801 6 log.go:172] (0xc001d0a4d0) (0xc0027eff40) Create stream I0409 22:12:18.999815 6 log.go:172] (0xc001d0a4d0) (0xc0027eff40) Stream added, broadcasting: 5 I0409 22:12:19.000721 6 log.go:172] (0xc001d0a4d0) Reply frame received for 5 I0409 22:12:19.070045 6 log.go:172] (0xc001d0a4d0) Data frame received for 3 I0409 22:12:19.070074 6 log.go:172] (0xc0027efea0) (3) Data frame handling I0409 22:12:19.070093 6 log.go:172] (0xc0027efea0) (3) Data frame sent I0409 22:12:19.070898 6 log.go:172] (0xc001d0a4d0) Data frame received for 3 I0409 22:12:19.070961 6 log.go:172] (0xc0027efea0) (3) Data frame handling I0409 22:12:19.071038 6 log.go:172] (0xc001d0a4d0) Data frame received for 5 I0409 22:12:19.071068 6 log.go:172] (0xc0027eff40) (5) Data frame handling I0409 22:12:19.073264 6 log.go:172] (0xc001d0a4d0) Data frame received for 1 I0409 22:12:19.073422 6 log.go:172] (0xc001233540) (1) Data 
frame handling I0409 22:12:19.073495 6 log.go:172] (0xc001233540) (1) Data frame sent I0409 22:12:19.073556 6 log.go:172] (0xc001d0a4d0) (0xc001233540) Stream removed, broadcasting: 1 I0409 22:12:19.073620 6 log.go:172] (0xc001d0a4d0) Go away received I0409 22:12:19.073731 6 log.go:172] (0xc001d0a4d0) (0xc001233540) Stream removed, broadcasting: 1 I0409 22:12:19.073824 6 log.go:172] (0xc001d0a4d0) (0xc0027efea0) Stream removed, broadcasting: 3 I0409 22:12:19.073886 6 log.go:172] (0xc001d0a4d0) (0xc0027eff40) Stream removed, broadcasting: 5 Apr 9 22:12:19.073: INFO: Deleting pod dns-2624... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:12:19.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2624" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":262,"skipped":4176,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:12:19.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-acd0b451-b531-4bd9-88f2-d9f16e6f0484 STEP: Creating a pod to test consume 
configMaps
Apr 9 22:12:19.215: INFO: Waiting up to 5m0s for pod "pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a" in namespace "configmap-340" to be "success or failure"
Apr 9 22:12:19.261: INFO: Pod "pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.908225ms
Apr 9 22:12:21.265: INFO: Pod "pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050189458s
Apr 9 22:12:23.269: INFO: Pod "pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054009383s
STEP: Saw pod success
Apr 9 22:12:23.269: INFO: Pod "pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a" satisfied condition "success or failure"
Apr 9 22:12:23.272: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a container configmap-volume-test:
STEP: delete the pod
Apr 9 22:12:23.319: INFO: Waiting for pod pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a to disappear
Apr 9 22:12:23.332: INFO: Pod pod-configmaps-216a4544-e356-4e13-93cb-127924b4751a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:12:23.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-340" for this suite.
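This variant of the ConfigMap-volume test adds two twists: the pod runs as a non-root user, and the ConfigMap key is remapped to a different file path ("with mappings"). A sketch of the relevant spec fields, with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapped-demo   # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # the "as non-root [LinuxOnly]" part of the test name
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--file_content=/etc/configmap-volume/path/to/data-2"]  # assumption
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config              # hypothetical
      items:                         # "with mappings": project a key to a chosen relative path
      - key: data-1
        path: path/to/data-2
```

Without `items`, every key would be projected as a file named after the key at the mount root; the mapping lets the test verify per-key path control.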
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4182,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:12:23.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-1733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1733 to expose endpoints map[]
Apr 9 22:12:23.468: INFO: Get endpoints failed (15.878544ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 9 22:12:24.471: INFO: successfully validated that service endpoint-test2 in namespace services-1733 exposes endpoints map[] (1.01932182s elapsed)
STEP: Creating pod pod1 in namespace services-1733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1733 to expose endpoints map[pod1:[80]]
Apr 9 22:12:27.556: INFO: successfully validated that service endpoint-test2 in namespace services-1733 exposes endpoints map[pod1:[80]] (3.078506946s elapsed)
STEP: Creating pod pod2 in namespace services-1733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1733 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 9 22:12:30.652: INFO: successfully validated that service endpoint-test2 in namespace services-1733 exposes endpoints map[pod1:[80] pod2:[80]] (3.091529941s elapsed)
STEP: Deleting pod pod1 in namespace services-1733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1733 to expose endpoints map[pod2:[80]]
Apr 9 22:12:31.674: INFO: successfully validated that service endpoint-test2 in namespace services-1733 exposes endpoints map[pod2:[80]] (1.017415429s elapsed)
STEP: Deleting pod pod2 in namespace services-1733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1733 to expose endpoints map[]
Apr 9 22:12:32.702: INFO: successfully validated that service endpoint-test2 in namespace services-1733 exposes endpoints map[] (1.023585769s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:12:32.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1733" for this suite.
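The endpoints test above works by creating a selector-based Service and then adding/removing matching pods, asserting each time that the Endpoints object tracks them. A sketch of the shape involved, assuming a hypothetical label key (the log confirms the service name, namespace, and port, but not the label):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-1733
spec:
  selector:
    name: endpoint-test2       # assumption: pods are selected by a label like this
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: services-1733
  labels:
    name: endpoint-test2       # matching label adds this pod's IP to the Endpoints object
spec:
  containers:
  - name: pause
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
    ports:
    - containerPort: 80
```

With no matching pods the Endpoints object exposes `map[]`; creating pod1 and pod2 moves it through `map[pod1:[80]]` and `map[pod1:[80] pod2:[80]]`, exactly as logged.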
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:9.479 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":264,"skipped":4252,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:12:32.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 9 22:12:32.877: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:12:33.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9594" for this suite.
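The CRD status-subresource test exercises GET/PUT/PATCH on a custom resource definition's `/status` endpoint, which only exists when the CRD enables the status subresource. A sketch of a CRD with that enabled, using entirely hypothetical group and kind names:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com    # hypothetical plural.group
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}    # enables the /status subresource the test gets/updates/patches
```

With `subresources.status` set, writes to the main resource ignore `.status` and writes to `/status` ignore everything else, which is the separation the test verifies.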
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":265,"skipped":4263,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:12:33.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 22:12:34.078: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 22:12:36.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067154, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067154, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067154, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722067154, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 22:12:39.114: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 9 22:12:39.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7000-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:12:40.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7030" for this suite.
STEP: Destroying namespace "webhook-7030-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.866 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":266,"skipped":4264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 9 22:12:40.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 9 22:12:40.392: INFO: Waiting up to 5m0s for pod "pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478" in namespace "emptydir-3126" to be "success or failure"
Apr 9 22:12:40.408: INFO: Pod "pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478": Phase="Pending", Reason="", readiness=false. Elapsed: 16.622424ms
Apr 9 22:12:42.412: INFO: Pod "pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020532981s
Apr 9 22:12:44.416: INFO: Pod "pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024553146s
STEP: Saw pod success
Apr 9 22:12:44.416: INFO: Pod "pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478" satisfied condition "success or failure"
Apr 9 22:12:44.419: INFO: Trying to get logs from node jerma-worker2 pod pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478 container test-container:
STEP: delete the pod
Apr 9 22:12:44.440: INFO: Waiting for pod pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478 to disappear
Apr 9 22:12:44.444: INFO: Pod pod-83f322e0-5f2d-4a6b-9a35-4b5f3bb4e478 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 9 22:12:44.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3126" for this suite.
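The `(non-root,0644,tmpfs)` emptyDir test creates a non-root pod whose container writes a mode-0644 file into a RAM-backed emptyDir and verifies the permissions. A sketch of the shape of that pod, with hypothetical names (the args mirror the test's naming but are an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # "non-root" in the test name
  containers:
  - name: test-container             # container name taken from the log above
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--new_file_0644=/test-volume/test-file", "--file_perm=/test-volume/test-file"]  # assumption
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # "tmpfs": back the volume with RAM instead of node disk
```

`medium: Memory` is what distinguishes this variant from the plain emptyDir tests; the volume then counts against the container's memory limit rather than node disk.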
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4291,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:12:44.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af Apr 9 22:12:44.558: INFO: Pod name my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af: Found 0 pods out of 1 Apr 9 22:12:49.570: INFO: Pod name my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af: Found 1 pods out of 1 Apr 9 22:12:49.570: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af" are running Apr 9 22:12:49.576: INFO: Pod "my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af-44j9q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 22:12:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 22:12:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 22:12:46 +0000 UTC Reason: Message:} 
{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-09 22:12:44 +0000 UTC Reason: Message:}]) Apr 9 22:12:49.576: INFO: Trying to dial the pod Apr 9 22:12:54.588: INFO: Controller my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af: Got expected result from replica 1 [my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af-44j9q]: "my-hostname-basic-589d5a70-af33-4333-8cc8-231cde3ee6af-44j9q", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:12:54.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5163" for this suite. • [SLOW TEST:10.145 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":268,"skipped":4298,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:12:54.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are 
locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:13:08.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9904" for this suite. • [SLOW TEST:14.085 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":269,"skipped":4315,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:13:08.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 9 22:13:08.747: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 9 22:13:08.771: INFO: Waiting for terminating namespaces to be deleted... 
Apr 9 22:13:08.798: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 9 22:13:08.806: INFO: fail-once-local-wvbbh from job-9904 started at 2020-04-09 22:12:54 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.806: INFO: Container c ready: false, restart count 1 Apr 9 22:13:08.806: INFO: fail-once-local-gmhzd from job-9904 started at 2020-04-09 22:13:01 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.806: INFO: Container c ready: false, restart count 1 Apr 9 22:13:08.806: INFO: fail-once-local-vw69v from job-9904 started at 2020-04-09 22:13:01 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.806: INFO: Container c ready: false, restart count 1 Apr 9 22:13:08.806: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.806: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:13:08.806: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.806: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 22:13:08.806: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 9 22:13:08.812: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.812: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 22:13:08.812: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.812: INFO: Container kube-bench ready: false, restart count 0 Apr 9 22:13:08.812: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.812: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 22:13:08.812: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.812: INFO: 
Container kube-hunter ready: false, restart count 0 Apr 9 22:13:08.812: INFO: fail-once-local-5kbfx from job-9904 started at 2020-04-09 22:12:54 +0000 UTC (1 container statuses recorded) Apr 9 22:13:08.812: INFO: Container c ready: false, restart count 1 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-92bf17f6-b23e-4a22-ba24-ca2341198136 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-92bf17f6-b23e-4a22-ba24-ca2341198136 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-92bf17f6-b23e-4a22-ba24-ca2341198136 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:18:17.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1115" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.611 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":270,"skipped":4316,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:18:17.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:18:22.420: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4460" for this suite. • [SLOW TEST:5.218 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":271,"skipped":4323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:18:22.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:18:22.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 9 22:18:23.370: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T22:18:23Z generation:1 name:name1 resourceVersion:6792012 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e1083155-a415-4b53-9198-f054f126b62b] num:map[num1:9223372036854775807 
num2:1000000]]} STEP: Creating second CR Apr 9 22:18:33.376: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T22:18:33Z generation:1 name:name2 resourceVersion:6792062 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:406d6819-29fc-411e-89ea-3bdee06043ea] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 9 22:18:43.383: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T22:18:23Z generation:2 name:name1 resourceVersion:6792092 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e1083155-a415-4b53-9198-f054f126b62b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 9 22:18:53.389: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T22:18:33Z generation:2 name:name2 resourceVersion:6792122 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:406d6819-29fc-411e-89ea-3bdee06043ea] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 9 22:19:03.398: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T22:18:23Z generation:2 name:name1 resourceVersion:6792153 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e1083155-a415-4b53-9198-f054f126b62b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 9 22:19:13.406: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T22:18:33Z generation:2 name:name2 resourceVersion:6792183 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:406d6819-29fc-411e-89ea-3bdee06043ea] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:19:23.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3487" for this suite. • [SLOW TEST:61.413 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":272,"skipped":4372,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:19:23.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:19:23.989: INFO: Creating deployment "webserver-deployment" Apr 9 22:19:24.004: INFO: Waiting for observed generation 1 Apr 9 22:19:26.163: INFO: Waiting for all required pods to come up Apr 9 22:19:26.166: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 9 22:19:34.175: INFO: Waiting for deployment "webserver-deployment" to complete Apr 9 22:19:34.180: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 9 22:19:34.186: INFO: Updating deployment webserver-deployment Apr 9 22:19:34.186: INFO: Waiting for observed generation 2 Apr 9 22:19:36.244: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 9 22:19:36.246: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 9 22:19:36.249: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 9 22:19:36.256: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 9 22:19:36.256: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 9 22:19:36.259: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 9 22:19:36.263: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 9 22:19:36.263: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 9 22:19:36.267: INFO: Updating deployment webserver-deployment Apr 9 22:19:36.267: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 9 22:19:36.309: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 9 22:19:36.409: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] 
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 9 22:19:38.801: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7686 /apis/apps/v1/namespaces/deployment-7686/deployments/webserver-deployment 31880faa-c822-4b8f-a1b4-aedc0af7387e 6792495 3 2020-04-09 22:19:23 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0005d5b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-09 22:19:36 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-09 22:19:36 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 9 22:19:38.806: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7686 /apis/apps/v1/namespaces/deployment-7686/replicasets/webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 6792492 3 2020-04-09 22:19:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 31880faa-c822-4b8f-a1b4-aedc0af7387e 0xc004b60eb7 0xc004b60eb8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b60f28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:19:38.806: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 9 22:19:38.806: INFO: 
&ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7686 /apis/apps/v1/namespaces/deployment-7686/replicasets/webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 6792486 3 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 31880faa-c822-4b8f-a1b4-aedc0af7387e 0xc004b60df7 0xc004b60df8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b60e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 9 22:19:38.813: INFO: Pod "webserver-deployment-595b5b9587-28zd2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-28zd2 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-28zd2 13c8ae8e-7bc3-4f3b-bcf6-d719b34980da 6792562 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd 
pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b613f7 0xc004b613f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAs
NonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.813: INFO: Pod "webserver-deployment-595b5b9587-2swcz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2swcz webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-2swcz 1c3c6d70-f889-4ee6-a41b-df84142875a2 6792517 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b61557 0xc004b61558}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.813: INFO: Pod "webserver-deployment-595b5b9587-4h5nb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4h5nb webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-4h5nb ab7a5a09-94a6-4d98-9ce8-6b769ae98083 6792330 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b616b7 0xc004b616b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.68,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cc685b1ce514fb212830c1fe46faa56de252a0b577d01a101ba06f1df823905e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.814: INFO: Pod "webserver-deployment-595b5b9587-5gnv2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5gnv2 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-5gnv2 6b83efc5-992b-4624-b64b-26813ba6340c 6792497 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b61837 0xc004b61838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.814: INFO: Pod "webserver-deployment-595b5b9587-6d524" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6d524 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-6d524 3c57f350-75ca-4582-a1e8-53cfa345ac2e 6792318 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b61997 0xc004b61998}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.229,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:30 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://35afd292c1417ee882048b508c02e69340249b065d6bcef4eef7f20bf534e587,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.814: INFO: Pod "webserver-deployment-595b5b9587-8t8d9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8t8d9 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-8t8d9 c0325e81-4c0c-4346-9d3f-d034128f0ba6 6792484 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b61b17 0xc004b61b18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.814: INFO: Pod "webserver-deployment-595b5b9587-8zxw8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8zxw8 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-8zxw8 a3bef08a-0004-44eb-9a41-1f3c59492333 6792343 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b61c77 0xc004b61c78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.232,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eaa238b29aab0b911c8eda00d1adcc8205737184ad88243980ec993f4ed90a6f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.815: INFO: Pod "webserver-deployment-595b5b9587-9tgwz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9tgwz webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-9tgwz aeff9cca-19df-4e8d-8fd3-2b7c72dcdf48 6792510 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b61df7 0xc004b61df8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.815: INFO: Pod "webserver-deployment-595b5b9587-b7mbg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b7mbg webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-b7mbg c6e28a44-6cd5-4208-8970-6c1c49c505a8 6792569 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc004b61f57 0xc004b61f58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.815: INFO: Pod "webserver-deployment-595b5b9587-d958z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d958z webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-d958z 61a41c7c-afe2-4a9b-9bed-d063fdd9b0d5 6792508 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a6167 0xc0022a6168}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.815: INFO: Pod "webserver-deployment-595b5b9587-dv4jx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dv4jx webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-dv4jx 7905d841-17de-47a8-87e6-3f94f21eb9a0 6792300 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a6537 0xc0022a6538}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.228,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bd1eeab86307a485fc494a3ce6584b59df609c9268d96cebf25c37770d8a41d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.815: INFO: Pod "webserver-deployment-595b5b9587-dwsqk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dwsqk webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-dwsqk 48badff6-f50d-4d8d-ba3b-c976b02da61e 6792563 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a67e7 0xc0022a67e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.816: INFO: Pod "webserver-deployment-595b5b9587-fsqn8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fsqn8 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-fsqn8 56b3c1af-047b-465a-8b70-44362a545801 6792353 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a6a97 0xc0022a6a98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.69,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5378a34d60d2637c84dcf7843a2640740e81088cc7ba76de30c5150a6a0edd0b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.816: INFO: Pod "webserver-deployment-595b5b9587-g9ljg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g9ljg webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-g9ljg 791b3619-3391-4a8b-b95d-64270b04b163 6792550 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a7227 0xc0022a7228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.816: INFO: Pod "webserver-deployment-595b5b9587-j2jwq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j2jwq webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-j2jwq dc6ff47c-c42d-4029-a018-8df5f396112e 6792537 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a7387 0xc0022a7388}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.816: INFO: Pod "webserver-deployment-595b5b9587-lhgds" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lhgds webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-lhgds 6d476165-6b0f-4218-9823-9af392f01a80 6792504 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a7507 0xc0022a7508}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.816: INFO: Pod "webserver-deployment-595b5b9587-q9vr2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q9vr2 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-q9vr2 ca18a58c-63a5-438c-8858-9e40f2eff7a7 6792336 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a7667 0xc0022a7668}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.231,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cd82e84c6c47648f79bee9c85d3ffb7cc2cf0dbe3c087df5d26bf5ff6659cdd9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.817: INFO: Pod "webserver-deployment-595b5b9587-sfts7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sfts7 webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-sfts7 dabb08be-b1bc-42f9-97c8-4eb5d901f388 6792356 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a77e7 0xc0022a77e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.70,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://612cfb47d50f4f03896e5c2ad3641fcb69fb7daeda67f42d3090f91186a737a6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.817: INFO: Pod "webserver-deployment-595b5b9587-sjp2f" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sjp2f webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-sjp2f ec4d267c-8571-4cff-b26b-6684d4db249b 6792346 0 2020-04-09 22:19:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a7967 0xc0022a7968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.230,StartTime:2020-04-09 22:19:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 22:19:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://745b01085b4f1faa5e632dc27206bbf62fa1ecb32ec194b5828d9766183074e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.818: INFO: Pod "webserver-deployment-595b5b9587-wng5f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wng5f webserver-deployment-595b5b9587- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-595b5b9587-wng5f 07b9a119-6a4f-4b23-8263-a2a3469224c3 6792500 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b61be047-b42f-401f-8801-b500729602bc 0xc0022a7ae7 0xc0022a7ae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.818: INFO: Pod "webserver-deployment-c7997dcc8-6b5tr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6b5tr webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-6b5tr 13a43eba-a21e-4542-b7f8-f12c69f4e5c1 6792527 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0022a7c47 0xc0022a7c48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.818: INFO: Pod "webserver-deployment-c7997dcc8-8qfq6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8qfq6 webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-8qfq6 dc79a7b0-6335-4f3d-ac39-0ccd95d7be19 6792528 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0022a7dc7 0xc0022a7dc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.818: INFO: Pod "webserver-deployment-c7997dcc8-dff2w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dff2w webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-dff2w 24630b86-167e-421f-9bc9-401be0afe677 6792421 0 2020-04-09 22:19:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0022a7f47 0xc0022a7f48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.819: INFO: Pod "webserver-deployment-c7997dcc8-g9cqj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g9cqj webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-g9cqj cb9ed2d5-e889-4664-92c0-496e1c5e33ec 6792533 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f20c7 0xc0053f20c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.819: INFO: Pod "webserver-deployment-c7997dcc8-hsrnz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hsrnz webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-hsrnz 95595f9e-4136-4b91-9ed4-7a740c7bb7cd 6792485 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f2247 0xc0053f2248}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.819: INFO: Pod "webserver-deployment-c7997dcc8-jcsmq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jcsmq webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-jcsmq 5bf55e6c-8713-4b24-8243-637d9afc985b 6792549 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f23c7 0xc0053f23c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.819: INFO: Pod "webserver-deployment-c7997dcc8-jfj4n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jfj4n webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-jfj4n 363875e7-1908-4c78-9eb6-6c92edd8009a 6792534 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f2547 0xc0053f2548}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.819: INFO: Pod "webserver-deployment-c7997dcc8-jrx8j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jrx8j webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-jrx8j 02c4434b-a73e-4350-bdec-14c569eb674c 6792422 0 2020-04-09 22:19:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f26c7 0xc0053f26c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.820: INFO: Pod "webserver-deployment-c7997dcc8-ljmwh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ljmwh webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-ljmwh 15daa9bb-b6d8-415d-bcd4-892458d028c0 6792570 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f2857 0xc0053f2858}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.820: INFO: Pod "webserver-deployment-c7997dcc8-pnnj2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pnnj2 webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-pnnj2 01de97d5-c7d7-4888-bb8f-9a0aeebcaa22 6792398 0 2020-04-09 22:19:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f29d7 0xc0053f29d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.820: INFO: Pod "webserver-deployment-c7997dcc8-ssw6d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ssw6d webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-ssw6d 5afbd974-ed56-4f70-a8a2-944764191524 6792515 0 2020-04-09 22:19:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f2b57 0xc0053f2b58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-09 22:19:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.820: INFO: Pod "webserver-deployment-c7997dcc8-vlxlg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vlxlg webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-vlxlg f08f42bf-4ae5-458e-864a-b47b1f0bbdf8 6792396 0 2020-04-09 22:19:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f2cd7 0xc0053f2cd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 9 22:19:38.820: INFO: Pod "webserver-deployment-c7997dcc8-xrf9r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xrf9r webserver-deployment-c7997dcc8- deployment-7686 /api/v1/namespaces/deployment-7686/pods/webserver-deployment-c7997dcc8-xrf9r f8c70627-be6c-47ed-8a2f-beb260176d14 6792408 0 2020-04-09 22:19:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 12a78efc-1b09-4e98-9aa0-64afbee9dacd 0xc0053f2e57 0xc0053f2e58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qt6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qt6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qt6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 22:19:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-09 22:19:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:19:38.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7686" for this suite. • [SLOW TEST:14.902 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":273,"skipped":4389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:19:38.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 9 22:19:39.717: INFO: Waiting up to 5m0s for pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f" in namespace "downward-api-829" to be "success or failure" Apr 9 22:19:40.083: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f": Phase="Pending", Reason="", readiness=false. Elapsed: 365.678083ms Apr 9 22:19:42.217: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499408236s Apr 9 22:19:44.450: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.733000618s Apr 9 22:19:46.630: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.912170998s Apr 9 22:19:49.114: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.39636346s Apr 9 22:19:51.133: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.415543498s Apr 9 22:19:53.150: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.432937108s STEP: Saw pod success Apr 9 22:19:53.150: INFO: Pod "downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f" satisfied condition "success or failure" Apr 9 22:19:53.167: INFO: Trying to get logs from node jerma-worker2 pod downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f container dapi-container: STEP: delete the pod Apr 9 22:19:53.612: INFO: Waiting for pod downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f to disappear Apr 9 22:19:53.622: INFO: Pod downward-api-139bf038-95ff-4487-b7cb-46e64a0d285f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:19:53.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-829" for this suite. • [SLOW TEST:15.022 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:19:53.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 9 22:19:54.019: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:20:09.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3825" for this suite. • [SLOW TEST:15.652 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4471,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:20:09.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-2b8bf078-7cf1-469b-a108-705f7ad43937 STEP: Creating a pod to test consume secrets Apr 9 22:20:09.577: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d" in namespace "projected-2592" to be "success or failure" Apr 9 22:20:09.581: INFO: Pod "pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855602ms Apr 9 22:20:11.585: INFO: Pod "pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007643131s Apr 9 22:20:13.590: INFO: Pod "pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012654035s STEP: Saw pod success Apr 9 22:20:13.590: INFO: Pod "pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d" satisfied condition "success or failure" Apr 9 22:20:13.593: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d container projected-secret-volume-test: STEP: delete the pod Apr 9 22:20:13.612: INFO: Waiting for pod pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d to disappear Apr 9 22:20:13.629: INFO: Pod pod-projected-secrets-142eb376-013c-4d3c-a348-41a1ea87e80d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:20:13.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2592" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:20:13.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:20:17.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":277,"skipped":4512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 9 22:20:17.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 9 22:20:18.036: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0464cd46-14e1-427d-bfba-def5a595a2c4", Controller:(*bool)(0xc0057d941a), BlockOwnerDeletion:(*bool)(0xc0057d941b)}} Apr 9 22:20:18.067: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"091220d5-3e55-41b8-8abf-cbd1c84d56b3", Controller:(*bool)(0xc0057d95aa), BlockOwnerDeletion:(*bool)(0xc0057d95ab)}} Apr 9 22:20:18.338: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3fc14aef-83b1-4391-92fb-323d67268a7a", Controller:(*bool)(0xc00566311a), BlockOwnerDeletion:(*bool)(0xc00566311b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 9 22:20:23.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4178" for this suite. 
• [SLOW TEST:5.608 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":278,"skipped":4536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSApr 9 22:20:23.530: INFO: Running AfterSuite actions on all nodes Apr 9 22:20:23.530: INFO: Running AfterSuite actions on node 1 Apr 9 22:20:23.530: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4371.297 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS