I0411 23:36:43.721016 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0411 23:36:43.721222 7 e2e.go:124] Starting e2e run "4138d95c-a78c-41f0-8ba2-2b0ef16101f3" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586648202 - Will randomize all specs
Will run 275 of 4992 specs

Apr 11 23:36:43.778: INFO: >>> kubeConfig: /root/.kube/config
Apr 11 23:36:43.780: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 11 23:36:43.805: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 11 23:36:43.837: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 11 23:36:43.837: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 11 23:36:43.837: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 11 23:36:43.856: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 11 23:36:43.856: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 11 23:36:43.856: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 11 23:36:43.858: INFO: kube-apiserver version: v1.17.0
Apr 11 23:36:43.858: INFO: >>> kubeConfig: /root/.kube/config
Apr 11 23:36:43.863: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:36:43.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
Apr 11 23:36:43.926: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:36:43.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5427" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":1,"skipped":15,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:36:43.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
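Annotation: the [It] spec above exercises init containers on a pod with restartPolicy Always. The e2e framework builds its pod programmatically in Go, but the shape it creates corresponds roughly to a manifest like the following sketch; all names and images here are illustrative assumptions, not taken from the test run.

```yaml
# Hypothetical manifest; not the pod the test actually creates.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always        # the behavior under test: RestartAlways
  initContainers:              # run sequentially to completion before "app" starts
  - name: init-1
    image: busybox:1.29
    command: ["true"]
  - name: init-2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2   # long-running container, started only after both inits exit 0
```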
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 11 23:36:44.000: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:36:50.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8437" for this suite.

• [SLOW TEST:6.645 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":2,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:36:50.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
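Annotation: the spec declared above expects the kubelet to restart a container once its HTTP liveness probe on /healthz starts failing (the log below records restartCount going from 0 to 1 after ~18s). A minimal pod of that shape might look like this sketch; the image, args, port, and thresholds are assumptions and may differ from what the test actually uses.

```yaml
# Hypothetical liveness-probe pod; illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # assumption: serves /healthz OK briefly, then 500s
    args: ["liveness"]
    livenessProbe:
      httpGet:
        path: /healthz       # probed periodically by the kubelet
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1    # a single failed probe triggers a container restart
```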
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-4ef88243-4eda-4e50-bc71-30c54a31cd2b in namespace container-probe-6460
Apr 11 23:36:54.691: INFO: Started pod liveness-4ef88243-4eda-4e50-bc71-30c54a31cd2b in namespace container-probe-6460
STEP: checking the pod's current state and verifying that restartCount is present
Apr 11 23:36:54.694: INFO: Initial restart count of pod liveness-4ef88243-4eda-4e50-bc71-30c54a31cd2b is 0
Apr 11 23:37:12.734: INFO: Restart count of pod container-probe-6460/liveness-4ef88243-4eda-4e50-bc71-30c54a31cd2b is now 1 (18.039930671s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:37:12.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6460" for this suite.

• [SLOW TEST:22.208 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":41,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:37:12.790: INFO: >>> kubeConfig: /root/.kube/config
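Annotation: the ConfigMap case beginning above checks that both `data` (UTF-8 text) and `binaryData` (arbitrary bytes, base64-encoded in the manifest) are reflected in a mounted volume. A sketch of such a ConfigMap follows; keys and values are hypothetical, only the naming convention (`configmap-test-upd-*`) comes from the log.

```yaml
# Hypothetical ConfigMap mixing text and binary payloads.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo
data:
  data-1: value-1                  # plain UTF-8 text, mounted as a file named "data-1"
binaryData:
  dump.bin: AQIDBAUGBwgJCgsMDQ4P   # base64 of raw bytes; need not be valid UTF-8
```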
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-74b129dc-aff3-4c5e-92d5-37bfe1fa4ff1
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:37:17.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9055" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":67,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:37:17.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 11 23:37:18.322: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 11 23:37:20.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 11 23:37:22.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245038, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 11 23:37:25.360: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:37:25.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-352" for this suite.
STEP: Destroying namespace "webhook-352-markers" for this suite.
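Annotation: the STEP lines above register a MutatingWebhookConfiguration, then update and patch its `rules` so that CREATE operations are first excluded (the ConfigMap is not mutated) and then included again (the ConfigMap is mutated). A sketch of such a configuration follows; the service name `e2e-test-webhook` and namespace `webhook-352` appear in the log, but the webhook name, path, and CA bundle are assumptions.

```yaml
# Hypothetical webhook registration; only service name/namespace come from the log.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook
webhooks:
- name: adding-configmap-data.example.com   # hypothetical
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook     # from the log above
      namespace: webhook-352
      path: /mutating-configmaps # assumed handler path
    caBundle: <base64 CA cert>   # placeholder
  rules:
  - operations: ["CREATE"]       # the test toggles this list via update/patch
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
```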
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.448 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":5,"skipped":74,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:37:25.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 11 23:37:25.621: INFO: Waiting up to 5m0s for pod "pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268" in namespace "emptydir-4618" to be "Succeeded or Failed"
Apr 11 23:37:25.625: INFO: Pod "pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178697ms
Apr 11 23:37:27.631: INFO: Pod "pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010417393s
Apr 11 23:37:29.636: INFO: Pod "pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014618537s
STEP: Saw pod success
Apr 11 23:37:29.636: INFO: Pod "pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268" satisfied condition "Succeeded or Failed"
Apr 11 23:37:29.639: INFO: Trying to get logs from node latest-worker2 pod pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268 container test-container:
STEP: delete the pod
Apr 11 23:37:29.689: INFO: Waiting for pod pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268 to disappear
Apr 11 23:37:29.700: INFO: Pod pod-1fe4ebd8-deb6-46e4-a270-d6afa6613268 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:37:29.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4618" for this suite.
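Annotation: the EmptyDir case above ("root,0644,default") writes a file as root with mode 0644 into an emptyDir volume on the default medium and verifies it from a test container, expecting the pod to finish in phase Succeeded. An illustrative pod of that shape; image and command are assumptions, not what the test actually runs.

```yaml
# Hypothetical emptyDir pod; illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never           # the test waits for "Succeeded or Failed"
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /test-volume"]   # assumed check of file mode/ownership
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium = node-local storage (not tmpfs)
```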
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":86,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:37:29.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-b42cm in namespace proxy-9690
I0411 23:37:29.835577 7 runners.go:190] Created replication controller with name: proxy-service-b42cm, namespace: proxy-9690, replica count: 1
I0411 23:37:30.885969 7 runners.go:190] proxy-service-b42cm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0411 23:37:31.886156 7 runners.go:190] proxy-service-b42cm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0411 23:37:32.886382 7 runners.go:190] proxy-service-b42cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 23:37:33.886631 7 runners.go:190] proxy-service-b42cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 23:37:34.886855 7 runners.go:190] proxy-service-b42cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 23:37:35.887074 7 runners.go:190] proxy-service-b42cm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 11 23:37:35.890: INFO: setup took 6.117074554s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 11 23:37:35.897: INFO: (0) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 6.945161ms)
Apr 11 23:37:35.897: INFO: (0) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 7.017519ms)
Apr 11 23:37:35.897: INFO: (0) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 7.10814ms)
Apr 11 23:37:35.897: INFO: (0) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 7.241984ms)
Apr 11 23:37:35.897: INFO: (0) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 7.144581ms)
Apr 11 23:37:35.897: INFO: (0) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 7.335127ms)
Apr 11 23:37:35.898: INFO: (0) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 7.574939ms)
Apr 11 23:37:35.902: INFO: (0) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 11.392328ms)
Apr 11 23:37:35.903: INFO: (0) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 12.823622ms)
Apr 11 23:37:35.904: INFO: (0) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 13.522078ms)
Apr 11 23:37:35.904: INFO: (0) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 13.485819ms)
Apr 11 23:37:35.906: INFO: (0) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 16.158491ms)
Apr 11 23:37:35.907: INFO: (0) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 16.635787ms)
Apr 11 23:37:35.907: INFO: (0) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 16.459039ms)
Apr 11 23:37:35.907: INFO: (0) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 16.596156ms)
Apr 11 23:37:35.910: INFO: (0) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test (200; 19.835875ms)
Apr 11 23:37:35.930: INFO: (1) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 19.887564ms)
Apr 11 23:37:35.930: INFO: (1) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 19.897578ms)
Apr 11 23:37:35.930: INFO: (1) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 19.884511ms)
Apr 11 23:37:35.930: INFO: (1) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 20.02843ms)
Apr 11 23:37:35.930: INFO: (1) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 20.067541ms)
Apr 11 23:37:35.930: INFO: (1) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test (200; 8.961015ms)
Apr 11 23:37:35.942: INFO: (2) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 9.353396ms)
Apr 11 23:37:35.944: INFO: (2) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 11.38508ms)
Apr 11 23:37:35.944: INFO: (2) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: ... (200; 11.560107ms)
Apr 11 23:37:35.944: INFO: (2) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 11.418973ms)
Apr 11 23:37:35.944: INFO: (2) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 11.368538ms)
Apr 11 23:37:35.944: INFO: (2) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 11.4245ms)
Apr 11 23:37:35.944: INFO: (2) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 11.372748ms)
Apr 11 23:37:35.951: INFO: (2) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 19.010203ms)
Apr 11 23:37:35.951: INFO: (2) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 18.918228ms)
Apr 11 23:37:35.951: INFO: (2) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 19.074043ms)
Apr 11 23:37:35.951: INFO: (2) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 18.976967ms)
Apr 11 23:37:35.951: INFO: (2) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 19.08227ms)
Apr 11 23:37:35.952: INFO: (2) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 19.220361ms)
Apr 11 23:37:35.952: INFO: (2) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 19.219026ms)
Apr 11 23:37:35.955: INFO: (3) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 3.188506ms)
Apr 11 23:37:35.955: INFO: (3) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 3.241048ms)
Apr 11 23:37:35.955: INFO: (3) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 3.500448ms)
Apr 11 23:37:35.955: INFO: (3) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 3.645558ms)
Apr 11 23:37:35.955: INFO: (3) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.676555ms)
Apr 11 23:37:35.955: INFO: (3) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test<... (200; 3.651832ms)
Apr 11 23:37:35.955: INFO: (3) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 3.689864ms)
Apr 11 23:37:35.956: INFO: (3) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.819243ms)
Apr 11 23:37:35.956: INFO: (3) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 3.948983ms)
Apr 11 23:37:35.956: INFO: (3) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.441185ms)
Apr 11 23:37:35.957: INFO: (3) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.846831ms)
Apr 11 23:37:35.957: INFO: (3) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 4.872261ms)
Apr 11 23:37:35.957: INFO: (3) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 5.216024ms)
Apr 11 23:37:35.957: INFO: (3) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 5.588111ms)
Apr 11 23:37:35.961: INFO: (4) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: ... (200; 4.898302ms)
Apr 11 23:37:35.962: INFO: (4) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 5.044288ms)
Apr 11 23:37:35.962: INFO: (4) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 4.967079ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 5.146445ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 5.12246ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 5.20725ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 5.167441ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 5.677882ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 5.743667ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 5.73908ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 5.827766ms)
Apr 11 23:37:35.963: INFO: (4) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 5.963669ms)
Apr 11 23:37:35.966: INFO: (5) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 2.663646ms)
Apr 11 23:37:35.967: INFO: (5) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 3.773544ms)
Apr 11 23:37:35.968: INFO: (5) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 5.073661ms)
Apr 11 23:37:35.968: INFO: (5) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 5.00821ms)
Apr 11 23:37:35.969: INFO: (5) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 5.062411ms)
Apr 11 23:37:35.969: INFO: (5) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 5.020286ms)
Apr 11 23:37:35.969: INFO: (5) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 5.152871ms)
Apr 11 23:37:35.969: INFO: (5) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 5.335775ms)
Apr 11 23:37:35.969: INFO: (5) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 5.329512ms)
Apr 11 23:37:35.969: INFO: (5) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 5.298444ms)
Apr 11 23:37:35.969: INFO: (5) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test<... (200; 3.57535ms)
Apr 11 23:37:35.973: INFO: (6) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 3.530177ms)
Apr 11 23:37:35.973: INFO: (6) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.05773ms)
Apr 11 23:37:35.973: INFO: (6) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 4.281199ms)
Apr 11 23:37:35.974: INFO: (6) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 4.60263ms)
Apr 11 23:37:35.974: INFO: (6) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.547756ms)
Apr 11 23:37:35.974: INFO: (6) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 4.622734ms)
Apr 11 23:37:35.974: INFO: (6) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 4.607194ms)
Apr 11 23:37:35.974: INFO: (6) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 4.602176ms)
Apr 11 23:37:35.974: INFO: (6) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 4.695346ms)
Apr 11 23:37:35.974: INFO: (6) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 4.875467ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.295177ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.346299ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 4.231454ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.29508ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 4.628876ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 4.670557ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 4.640404ms)
Apr 11 23:37:35.979: INFO: (7) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: ... (200; 5.659177ms)
Apr 11 23:37:35.983: INFO: (8) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 3.018208ms)
Apr 11 23:37:35.983: INFO: (8) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 3.088839ms)
Apr 11 23:37:35.983: INFO: (8) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.265704ms)
Apr 11 23:37:35.983: INFO: (8) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 3.337972ms)
Apr 11 23:37:35.983: INFO: (8) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.250694ms)
Apr 11 23:37:35.983: INFO: (8) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test<... (200; 3.747628ms)
Apr 11 23:37:35.984: INFO: (8) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 3.830185ms)
Apr 11 23:37:35.984: INFO: (8) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 3.87074ms)
Apr 11 23:37:35.984: INFO: (8) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 3.878559ms)
Apr 11 23:37:35.984: INFO: (8) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 3.961763ms)
Apr 11 23:37:35.984: INFO: (8) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 4.081871ms)
Apr 11 23:37:35.984: INFO: (8) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 3.935911ms)
Apr 11 23:37:35.985: INFO: (8) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.416603ms)
Apr 11 23:37:35.985: INFO: (8) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 4.722429ms)
Apr 11 23:37:35.987: INFO: (9) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 1.779548ms)
Apr 11 23:37:35.988: INFO: (9) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 2.839023ms)
Apr 11 23:37:35.988: INFO: (9) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 3.025262ms)
Apr 11 23:37:35.988: INFO: (9) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.211659ms)
Apr 11 23:37:35.988: INFO: (9) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.311143ms)
Apr 11 23:37:35.989: INFO: (9) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test (200; 4.595409ms)
Apr 11 23:37:35.990: INFO: (9) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 4.69652ms)
Apr 11 23:37:35.990: INFO: (9) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.694302ms)
Apr 11 23:37:35.990: INFO: (9) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 4.726399ms)
Apr 11 23:37:35.990: INFO: (9) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.694072ms)
Apr 11 23:37:35.990: INFO: (9) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 4.683052ms)
Apr 11 23:37:35.990: INFO: (9) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 5.0086ms)
Apr 11 23:37:35.993: INFO: (10) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 3.38925ms)
Apr 11 23:37:35.994: INFO: (10) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: ... (200; 3.816601ms)
Apr 11 23:37:35.994: INFO: (10) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 3.846684ms)
Apr 11 23:37:35.994: INFO: (10) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.841031ms)
Apr 11 23:37:35.994: INFO: (10) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.068742ms)
Apr 11 23:37:35.994: INFO: (10) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.121337ms)
Apr 11 23:37:35.994: INFO: (10) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 4.352227ms)
Apr 11 23:37:35.994: INFO: (10) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 4.40345ms)
Apr 11 23:37:35.995: INFO: (10) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.816711ms)
Apr 11 23:37:35.995: INFO: (10) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.814717ms)
Apr 11 23:37:35.995: INFO: (10) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 4.798914ms)
Apr 11 23:37:35.995: INFO: (10) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 4.998709ms)
Apr 11 23:37:35.995: INFO: (10) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 5.050923ms)
Apr 11 23:37:35.995: INFO: (10) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/:
bar (200; 5.05044ms) Apr 11 23:37:35.995: INFO: (10) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 5.231316ms) Apr 11 23:37:35.998: INFO: (11) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test<... (200; 3.621929ms) Apr 11 23:37:35.999: INFO: (11) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 3.653477ms) Apr 11 23:37:35.999: INFO: (11) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 3.640027ms) Apr 11 23:37:35.999: INFO: (11) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 3.662321ms) Apr 11 23:37:35.999: INFO: (11) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 3.75766ms) Apr 11 23:37:35.999: INFO: (11) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 3.746846ms) Apr 11 23:37:36.000: INFO: (11) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 4.730273ms) Apr 11 23:37:36.000: INFO: (11) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.735217ms) Apr 11 23:37:36.000: INFO: (11) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 4.945882ms) Apr 11 23:37:36.001: INFO: (11) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 5.179149ms) Apr 11 23:37:36.001: INFO: (11) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 5.246263ms) Apr 11 23:37:36.008: INFO: (12) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 7.713796ms) Apr 11 23:37:36.009: INFO: (12) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... 
(200; 7.870944ms) Apr 11 23:37:36.009: INFO: (12) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 8.007078ms) Apr 11 23:37:36.009: INFO: (12) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 7.967317ms) Apr 11 23:37:36.009: INFO: (12) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 8.071808ms) Apr 11 23:37:36.009: INFO: (12) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 8.172302ms) Apr 11 23:37:36.009: INFO: (12) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test<... (200; 8.892932ms) Apr 11 23:37:36.010: INFO: (12) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 8.929382ms) Apr 11 23:37:36.014: INFO: (13) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 4.103478ms) Apr 11 23:37:36.014: INFO: (13) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 4.178383ms) Apr 11 23:37:36.014: INFO: (13) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 4.236542ms) Apr 11 23:37:36.014: INFO: (13) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 4.280953ms) Apr 11 23:37:36.014: INFO: (13) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.246956ms) Apr 11 23:37:36.014: INFO: (13) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.236228ms) Apr 11 23:37:36.015: INFO: (13) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 4.887485ms) Apr 11 23:37:36.015: INFO: (13) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test<... 
(200; 4.994426ms) Apr 11 23:37:36.015: INFO: (13) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.961354ms) Apr 11 23:37:36.015: INFO: (13) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 4.977031ms) Apr 11 23:37:36.015: INFO: (13) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.932391ms) Apr 11 23:37:36.015: INFO: (13) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.953083ms) Apr 11 23:37:36.019: INFO: (14) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.130267ms) Apr 11 23:37:36.019: INFO: (14) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.174676ms) Apr 11 23:37:36.019: INFO: (14) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.141005ms) Apr 11 23:37:36.019: INFO: (14) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 4.209573ms) Apr 11 23:37:36.019: INFO: (14) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 4.278002ms) Apr 11 23:37:36.019: INFO: (14) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 4.316543ms) Apr 11 23:37:36.019: INFO: (14) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 4.372146ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.753134ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... 
(200; 4.789476ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.811279ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test (200; 4.757903ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 4.714672ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.739152ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 4.960015ms) Apr 11 23:37:36.020: INFO: (14) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 5.122916ms) Apr 11 23:37:36.022: INFO: (15) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 2.000119ms) Apr 11 23:37:36.023: INFO: (15) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 2.517195ms) Apr 11 23:37:36.024: INFO: (15) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 4.092062ms) Apr 11 23:37:36.024: INFO: (15) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.102757ms) Apr 11 23:37:36.024: INFO: (15) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: test (200; 4.165491ms) Apr 11 23:37:36.024: INFO: (15) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... 
(200; 4.145636ms) Apr 11 23:37:36.025: INFO: (15) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.570238ms) Apr 11 23:37:36.025: INFO: (15) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 4.542107ms) Apr 11 23:37:36.025: INFO: (15) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.589201ms) Apr 11 23:37:36.025: INFO: (15) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 4.581408ms) Apr 11 23:37:36.025: INFO: (15) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 5.062329ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 2.507878ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 2.569699ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 2.563822ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 2.885074ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 3.046767ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 3.033568ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 3.077584ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:1080/proxy/: ... (200; 3.147484ms) Apr 11 23:37:36.028: INFO: (16) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: ... 
(200; 4.251442ms) Apr 11 23:37:36.047: INFO: (17) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 4.365504ms) Apr 11 23:37:36.047: INFO: (17) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 4.353958ms) Apr 11 23:37:36.047: INFO: (17) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 4.363085ms) Apr 11 23:37:36.047: INFO: (17) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 4.497562ms) Apr 11 23:37:36.047: INFO: (17) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.475176ms) Apr 11 23:37:36.047: INFO: (17) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: ... (200; 4.09592ms) Apr 11 23:37:36.054: INFO: (18) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... (200; 4.163626ms) Apr 11 23:37:36.054: INFO: (18) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.692532ms) Apr 11 23:37:36.054: INFO: (18) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.708021ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 4.702938ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 4.786734ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 4.713326ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 4.840741ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 4.863297ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 
4.6278ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 4.844953ms) Apr 11 23:37:36.055: INFO: (18) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 4.940776ms) Apr 11 23:37:36.057: INFO: (19) /api/v1/namespaces/proxy-9690/pods/http:proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 2.094843ms) Apr 11 23:37:36.057: INFO: (19) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:443/proxy/: ... (200; 4.26129ms) Apr 11 23:37:36.059: INFO: (19) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:460/proxy/: tls baz (200; 4.374247ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:160/proxy/: foo (200; 4.82035ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname1/proxy/: foo (200; 5.223433ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2/proxy/: test (200; 5.32919ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/pods/https:proxy-service-b42cm-vwsv2:462/proxy/: tls qux (200; 5.410321ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname2/proxy/: tls qux (200; 5.592093ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/services/http:proxy-service-b42cm:portname2/proxy/: bar (200; 5.678468ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname2/proxy/: bar (200; 5.593689ms) Apr 11 23:37:36.060: INFO: (19) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/: test<... 
(200; 5.624765ms)
Apr 11 23:37:36.061: INFO: (19) /api/v1/namespaces/proxy-9690/services/proxy-service-b42cm:portname1/proxy/: foo (200; 5.853428ms)
Apr 11 23:37:36.061: INFO: (19) /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/: tls baz (200; 6.466324ms)
Apr 11 23:37:36.061: INFO: (19) /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:162/proxy/: bar (200; 6.498848ms)
STEP: deleting ReplicationController proxy-service-b42cm in namespace proxy-9690, will wait for the garbage collector to delete the pods
Apr 11 23:37:36.118: INFO: Deleting ReplicationController proxy-service-b42cm took: 5.275498ms
Apr 11 23:37:36.419: INFO: Terminating ReplicationController proxy-service-b42cm pods took: 300.27345ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:37:38.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9690" for this suite.
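The probes above all hit apiserver proxy subresource paths of the form /api/v1/namespaces/&lt;ns&gt;/{pods|services}/[&lt;scheme&gt;:]&lt;name&gt;[:&lt;port&gt;]/proxy/. A minimal sketch of how such a path is assembled (the helper name is illustrative, not part of the e2e framework):

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy subresource path like the ones in the log.

    kind is "pods" or "services"; scheme ("http"/"https") and port (a
    container port number or a named service port) are optional and, when
    present, are joined to the resource name with ':'.
    """
    target = name
    if port is not None:
        target = f"{target}:{port}"
    if scheme is not None:
        target = f"{scheme}:{target}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Examples matching entries from the run above:
print(proxy_path("proxy-9690", "pods", "proxy-service-b42cm-vwsv2", 1080))
# -> /api/v1/namespaces/proxy-9690/pods/proxy-service-b42cm-vwsv2:1080/proxy/
print(proxy_path("proxy-9690", "services", "proxy-service-b42cm", "tlsportname1", "https"))
# -> /api/v1/namespaces/proxy-9690/services/https:proxy-service-b42cm:tlsportname1/proxy/
```

Prefixing the scheme selects plain vs. TLS proxying, which is why the log shows variants like https:proxy-service-b42cm-vwsv2:443 alongside bare pod names.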
• [SLOW TEST:9.219 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":7,"skipped":94,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:37:38.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Apr 11 23:37:39.560: INFO: created pod pod-service-account-defaultsa
Apr 11 23:37:39.560: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 11 23:37:39.587: INFO: created pod pod-service-account-mountsa
Apr 11 23:37:39.587: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 11 23:37:39.611: INFO: created pod pod-service-account-nomountsa
Apr 11 23:37:39.611: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 11 23:37:39.626: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 11 23:37:39.626: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 11 23:37:39.681: INFO: created pod pod-service-account-mountsa-mountspec
Apr 11 23:37:39.681: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 11 23:37:39.729: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 11 23:37:39.729: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 11 23:37:39.746: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 11 23:37:39.746: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 11 23:37:39.777: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 11 23:37:39.777: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 11 23:37:39.817: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 11 23:37:39.817: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:37:39.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5450" for this suite.
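The nine pods above walk the automount decision table: the pod spec's automountServiceAccountToken, when set, overrides the service account's setting, and when neither is set the token is mounted by default. A sketch of that precedence rule (the function is illustrative, not framework code):

```python
def token_automounted(pod_automount=None, sa_automount=None):
    """Decide whether a service account token volume gets mounted.

    The pod spec's automountServiceAccountToken wins over the service
    account's setting; if neither is set, the token is mounted by default.
    None models an unset field, True/False an explicit setting.
    """
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```

This reproduces the log's pairings, e.g. pod-service-account-nomountsa-mountspec mounts the token (pod spec True beats service account False) while pod-service-account-defaultsa-nomountspec does not (pod spec False, service account unset).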
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":8,"skipped":108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:37:39.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8081
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8081
STEP: Creating statefulset with conflicting port in namespace statefulset-8081
STEP: Waiting until pod test-pod will start running in namespace statefulset-8081
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8081
Apr 11 23:37:52.114: INFO: Observed stateful pod in namespace: statefulset-8081, name: ss-0, uid: 036f43a2-8311-4b8b-b167-bbccb861420f, status phase: Pending. Waiting for statefulset controller to delete.
Apr 11 23:37:52.735: INFO: Observed stateful pod in namespace: statefulset-8081, name: ss-0, uid: 036f43a2-8311-4b8b-b167-bbccb861420f, status phase: Failed. Waiting for statefulset controller to delete.
Apr 11 23:37:52.745: INFO: Observed stateful pod in namespace: statefulset-8081, name: ss-0, uid: 036f43a2-8311-4b8b-b167-bbccb861420f, status phase: Failed. Waiting for statefulset controller to delete.
Apr 11 23:37:52.766: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8081
STEP: Removing pod with conflicting port in namespace statefulset-8081
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8081 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 11 23:37:56.856: INFO: Deleting all statefulset in ns statefulset-8081
Apr 11 23:37:56.860: INFO: Scaling statefulset ss to 0
Apr 11 23:38:06.875: INFO: Waiting for statefulset status.replicas updated to 0
Apr 11 23:38:06.878: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:38:06.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8081" for this suite.
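The wait loop above keeps observing ss-0 until a pod with a new UID appears, which is how the test distinguishes true recreation from a mere phase change on the original pod. A sketch of that check (illustrative only, using hypothetical (uid, phase) tuples rather than real watch events):

```python
def recreated(events):
    """Return True once a pod with a new UID shows up after the first one.

    events is an iterable of (uid, phase) observations, like the ones the
    test logs while waiting for ss-0 to be deleted and recreated; a change
    of UID under the same pod name means the controller made a new pod.
    """
    first_uid = None
    for uid, phase in events:
        if first_uid is None:
            first_uid = uid
        elif uid != first_uid:
            return True
    return False
```

Phase transitions alone (Pending to Failed above) are not enough: the original pod keeps its UID until the StatefulSet controller deletes it and creates a replacement.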
• [SLOW TEST:26.943 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":9,"skipped":131,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:38:06.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 11 23:38:11.047: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:38:11.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9323" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:38:11.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7567 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7567;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7567 A)" && test -n
"$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7567;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7567.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7567.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7567.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7567.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7567.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7567.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7567.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.146.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.146.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.146.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.146.188_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7567 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7567;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7567 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7567;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7567.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7567.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7567.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7567.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7567.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7567.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7567.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7567.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7567.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.146.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.146.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.146.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.146.188_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 11 23:38:17.296: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.300: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.303: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.306: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.309: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.311: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.317: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.337: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.340: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.343: INFO: Unable to read jessie_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.345: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.348: INFO: Unable to read jessie_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.351: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.354: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.357: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:17.373: INFO: Lookups using dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7567 wheezy_tcp@dns-test-service.dns-7567 wheezy_udp@dns-test-service.dns-7567.svc wheezy_tcp@dns-test-service.dns-7567.svc wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7567 jessie_tcp@dns-test-service.dns-7567 jessie_udp@dns-test-service.dns-7567.svc jessie_tcp@dns-test-service.dns-7567.svc jessie_udp@_http._tcp.dns-test-service.dns-7567.svc jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc]
Apr 11 23:38:22.378: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860)
Apr 11 23:38:22.382: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not
find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.386: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.390: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.393: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.397: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.400: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.427: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.430: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: 
the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.434: INFO: Unable to read jessie_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.440: INFO: Unable to read jessie_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.447: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.450: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:22.470: INFO: Lookups using dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7567 wheezy_tcp@dns-test-service.dns-7567 wheezy_udp@dns-test-service.dns-7567.svc wheezy_tcp@dns-test-service.dns-7567.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7567 jessie_tcp@dns-test-service.dns-7567 jessie_udp@dns-test-service.dns-7567.svc jessie_tcp@dns-test-service.dns-7567.svc jessie_udp@_http._tcp.dns-test-service.dns-7567.svc jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc] Apr 11 23:38:27.378: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.381: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.390: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.393: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.396: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.399: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.415: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.417: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.420: INFO: Unable to read jessie_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.424: INFO: Unable to read jessie_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.427: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.430: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.432: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:27.449: INFO: Lookups using dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7567 wheezy_tcp@dns-test-service.dns-7567 wheezy_udp@dns-test-service.dns-7567.svc wheezy_tcp@dns-test-service.dns-7567.svc wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7567 jessie_tcp@dns-test-service.dns-7567 jessie_udp@dns-test-service.dns-7567.svc jessie_tcp@dns-test-service.dns-7567.svc jessie_udp@_http._tcp.dns-test-service.dns-7567.svc jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc] Apr 11 23:38:32.378: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.382: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 
23:38:32.389: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.393: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.396: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.400: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.413: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.433: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.436: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.438: INFO: Unable to read jessie_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods 
dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.442: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.444: INFO: Unable to read jessie_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.447: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.450: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.454: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:32.473: INFO: Lookups using dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7567 wheezy_tcp@dns-test-service.dns-7567 wheezy_udp@dns-test-service.dns-7567.svc wheezy_tcp@dns-test-service.dns-7567.svc wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7567 jessie_tcp@dns-test-service.dns-7567 jessie_udp@dns-test-service.dns-7567.svc jessie_tcp@dns-test-service.dns-7567.svc 
jessie_udp@_http._tcp.dns-test-service.dns-7567.svc jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc] Apr 11 23:38:37.378: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.382: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.388: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.392: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.399: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.402: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod 
dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.422: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.425: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.428: INFO: Unable to read jessie_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.434: INFO: Unable to read jessie_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.438: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.440: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.443: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:37.458: INFO: Lookups using dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7567 wheezy_tcp@dns-test-service.dns-7567 wheezy_udp@dns-test-service.dns-7567.svc wheezy_tcp@dns-test-service.dns-7567.svc wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7567 jessie_tcp@dns-test-service.dns-7567 jessie_udp@dns-test-service.dns-7567.svc jessie_tcp@dns-test-service.dns-7567.svc jessie_udp@_http._tcp.dns-test-service.dns-7567.svc jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc] Apr 11 23:38:42.377: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.380: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.384: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.387: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.389: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.392: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.394: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.409: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.411: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.414: INFO: Unable to read jessie_udp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567 from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.418: 
INFO: Unable to read jessie_udp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.423: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc from pod dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860: the server could not find the requested resource (get pods dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860) Apr 11 23:38:42.441: INFO: Lookups using dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7567 wheezy_tcp@dns-test-service.dns-7567 wheezy_udp@dns-test-service.dns-7567.svc wheezy_tcp@dns-test-service.dns-7567.svc wheezy_udp@_http._tcp.dns-test-service.dns-7567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7567 jessie_tcp@dns-test-service.dns-7567 jessie_udp@dns-test-service.dns-7567.svc jessie_tcp@dns-test-service.dns-7567.svc jessie_udp@_http._tcp.dns-test-service.dns-7567.svc jessie_tcp@_http._tcp.dns-test-service.dns-7567.svc] Apr 11 23:38:47.459: INFO: DNS probes using dns-7567/dns-test-a4ae16ef-d19b-4748-995c-4bd04265c860 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:38:48.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7567" for this suite.
• [SLOW TEST:37.250 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":11,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:38:48.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 11 23:38:48.435: INFO: Waiting up to 5m0s for pod "pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f" in namespace "emptydir-9196" to be "Succeeded or Failed"
Apr 11 23:38:48.444: INFO: Pod "pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.946257ms
Apr 11 23:38:50.529: INFO: Pod "pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09367142s
Apr 11 23:38:52.533: INFO: Pod "pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097531312s
STEP: Saw pod success
Apr 11 23:38:52.533: INFO: Pod "pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f" satisfied condition "Succeeded or Failed"
Apr 11 23:38:52.536: INFO: Trying to get logs from node latest-worker2 pod pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f container test-container:
STEP: delete the pod
Apr 11 23:38:52.748: INFO: Waiting for pod pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f to disappear
Apr 11 23:38:52.774: INFO: Pod pod-30b8e78b-2fb6-405d-9a97-b91a6c46936f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:38:52.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9196" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":178,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:38:52.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:39:08.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4778" for this suite.
• [SLOW TEST:16.134 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":13,"skipped":190,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:39:08.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 11 23:39:08.980: INFO: PodSpec: initContainers in spec.initContainers
Apr 11 23:39:53.462: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c41ae363-349a-41ae-9586-82df5951921a", GenerateName:"", Namespace:"init-container-1349", SelfLink:"/api/v1/namespaces/init-container-1349/pods/pod-init-c41ae363-349a-41ae-9586-82df5951921a", UID:"d913303a-09da-4944-9151-3e56f8e09cdc", ResourceVersion:"7328318", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722245148, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"980556792"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"",
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5gcqj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00200db40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5gcqj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5gcqj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5gcqj", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002931c98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d4e380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002931d20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002931d40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002931d48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002931d4c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245149, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245149, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245149, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245149, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.9", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.9"}}, StartTime:(*v1.Time)(0xc0024f86c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d4e460)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d4e4d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"containerd://e2e86949807a44a2b71db5b3a416e9a92aab18eb7ff75dcff6330097d66cb748", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024f8700), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024f86e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002931def)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:39:53.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1349" for this suite. 
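The pod dump above shows why the test passes: init1 (`/bin/false`) keeps terminating (RestartCount:3), init2 stays Waiting, and the app container run1 is never started. A minimal Python sketch of the init-container ordering rule this exercises — simplified illustrative semantics, not the actual kubelet code:

```python
def next_action(init_results, restart_policy="Always"):
    """Decide what the kubelet does next, given exit codes of init
    containers so far (None = not started yet).

    Init containers run one at a time, in order; each must exit 0 before
    the next starts. With RestartPolicy=Always a failed init container is
    restarted (with backoff) rather than failing the pod permanently, and
    app containers never start while any init container is failing.
    """
    for i, result in enumerate(init_results):
        if result is None:
            return f"start init[{i}]"
        if result != 0:
            return f"restart init[{i}]" if restart_policy == "Always" else "fail pod"
    return "start app containers"

# init1 ran /bin/false (exit 1); init2 never started -> retry init1 forever.
print(next_action([1, None]))  # restart init[0]
print(next_action([0, None]))  # start init[1]
print(next_action([0, 0]))     # start app containers
```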
• [SLOW TEST:44.555 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":14,"skipped":234,"failed":0}
SSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:39:53.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:39:53.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7201" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":15,"skipped":237,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:39:53.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 11 23:39:54.587: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 11 23:39:56.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245194, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245194, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245194, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722245194, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 11 23:39:59.628: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:39:59.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:40:00.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-723" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.170 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":16,"skipped":244,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:40:01.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 11 23:40:01.135: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
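The CustomResourceConversionWebhook run above creates CRs at two versions and lists them at each version, so the webhook must convert a non-homogeneous list. A schematic Python sketch of the conversion a webhook performs — the field names follow the ConversionReview shape, but this is an illustrative identity conversion, not the real e2e webhook:

```python
def convert_review(review):
    """Sketch of CR conversion: the request carries objects at mixed
    versions plus one desiredAPIVersion; the response returns them all,
    in the same order, rewritten to that version.

    Here "conversion" only rewrites apiVersion; a real webhook would also
    transform the spec between schemas.
    """
    req = review["request"]
    desired = req["desiredAPIVersion"]
    converted = []
    for obj in req["objects"]:           # may mix v1 and v2 objects
        out = dict(obj)
        out["apiVersion"] = desired      # identity conversion apart from version
        converted.append(out)
    return {"response": {"uid": req["uid"],
                         "convertedObjects": converted,
                         "result": {"status": "Success"}}}

# Hypothetical group/versions for illustration.
review = {"request": {"uid": "abc", "desiredAPIVersion": "stable.example.com/v2",
                      "objects": [{"apiVersion": "stable.example.com/v1", "kind": "E2e"},
                                  {"apiVersion": "stable.example.com/v2", "kind": "E2e"}]}}
resp = convert_review(review)
print(resp["response"]["convertedObjects"][0]["apiVersion"])  # stable.example.com/v2
```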
Apr 11 23:40:01.145: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:01.163: INFO: Number of nodes with available pods: 0 Apr 11 23:40:01.163: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:02.168: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:02.171: INFO: Number of nodes with available pods: 0 Apr 11 23:40:02.172: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:03.171: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:03.175: INFO: Number of nodes with available pods: 0 Apr 11 23:40:03.175: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:04.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:04.169: INFO: Number of nodes with available pods: 0 Apr 11 23:40:04.169: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:05.168: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:05.171: INFO: Number of nodes with available pods: 1 Apr 11 23:40:05.171: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:06.171: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:06.175: INFO: Number of nodes with available pods: 2 Apr 11 23:40:06.175: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 11 23:40:06.206: INFO: Wrong image for pod: daemon-set-cm8jn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:06.206: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:06.229: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:07.234: INFO: Wrong image for pod: daemon-set-cm8jn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:07.234: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:07.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:08.233: INFO: Wrong image for pod: daemon-set-cm8jn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:08.233: INFO: Pod daemon-set-cm8jn is not available Apr 11 23:40:08.233: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:08.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:09.233: INFO: Wrong image for pod: daemon-set-cm8jn. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:09.233: INFO: Pod daemon-set-cm8jn is not available Apr 11 23:40:09.233: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:09.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:10.234: INFO: Wrong image for pod: daemon-set-cm8jn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:10.234: INFO: Pod daemon-set-cm8jn is not available Apr 11 23:40:10.234: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:10.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:11.234: INFO: Wrong image for pod: daemon-set-cm8jn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:11.234: INFO: Pod daemon-set-cm8jn is not available Apr 11 23:40:11.234: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:11.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:12.233: INFO: Wrong image for pod: daemon-set-cm8jn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 11 23:40:12.233: INFO: Pod daemon-set-cm8jn is not available Apr 11 23:40:12.233: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:12.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:13.234: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:13.234: INFO: Pod daemon-set-r2bwd is not available Apr 11 23:40:13.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:14.233: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:14.234: INFO: Pod daemon-set-r2bwd is not available Apr 11 23:40:14.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:15.234: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:15.234: INFO: Pod daemon-set-r2bwd is not available Apr 11 23:40:15.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:16.261: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 11 23:40:16.265: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:17.233: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:17.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:18.248: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:18.248: INFO: Pod daemon-set-hjsvv is not available Apr 11 23:40:18.253: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:19.233: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:19.233: INFO: Pod daemon-set-hjsvv is not available Apr 11 23:40:19.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:20.234: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:20.234: INFO: Pod daemon-set-hjsvv is not available Apr 11 23:40:20.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:21.233: INFO: Wrong image for pod: daemon-set-hjsvv. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:21.233: INFO: Pod daemon-set-hjsvv is not available Apr 11 23:40:21.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:22.233: INFO: Wrong image for pod: daemon-set-hjsvv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 11 23:40:22.233: INFO: Pod daemon-set-hjsvv is not available Apr 11 23:40:22.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:23.233: INFO: Pod daemon-set-vdbm7 is not available Apr 11 23:40:23.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
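The polling output above shows the RollingUpdate proceeding one node at a time: a pod with the old image goes "not available", its replacement appears, then the next node follows. A compact Python sketch of that delete-then-create loop, assuming the default maxUnavailable of 1 (illustrative only, not the DaemonSet controller):

```python
def rolling_update(nodes, old_image, new_image):
    """Sketch of DaemonSet RollingUpdate with maxUnavailable=1:
    on each node in turn, delete the old pod, then create the new one,
    so at most one node lacks an available pod at any moment.
    Returns the ordered list of actions taken."""
    running = {node: old_image for node in nodes}
    actions = []
    for node in nodes:
        actions.append(f"delete {old_image} pod on {node}")
        del running[node]                       # node briefly has no pod
        actions.append(f"create {new_image} pod on {node}")
        running[node] = new_image
    assert all(img == new_image for img in running.values())
    return actions

acts = rolling_update(["latest-worker", "latest-worker2"],
                      "httpd:2.4.38-alpine", "agnhost:2.12")
print(len(acts))  # 4
```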
Apr 11 23:40:23.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:23.242: INFO: Number of nodes with available pods: 1 Apr 11 23:40:23.242: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:24.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:24.251: INFO: Number of nodes with available pods: 1 Apr 11 23:40:24.251: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:25.248: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:25.251: INFO: Number of nodes with available pods: 1 Apr 11 23:40:25.251: INFO: Node latest-worker is running more than one daemon pod Apr 11 23:40:26.248: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 23:40:26.251: INFO: Number of nodes with available pods: 2 Apr 11 23:40:26.251: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4365, will wait for the garbage collector to delete the pods Apr 11 23:40:26.326: INFO: Deleting DaemonSet.extensions daemon-set took: 6.556442ms Apr 11 23:40:26.426: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.254432ms Apr 11 23:40:33.029: INFO: Number of nodes with available pods: 0 Apr 11 23:40:33.029: INFO: Number of running nodes: 0, number of 
available pods: 0
Apr 11 23:40:33.031: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4365/daemonsets","resourceVersion":"7328607"},"items":null}
Apr 11 23:40:33.034: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4365/pods","resourceVersion":"7328607"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:40:33.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4365" for this suite.
• [SLOW TEST:32.041 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":17,"skipped":260,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:40:33.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-ab550369-2f34-45e6-9e2f-893cba36feb5 STEP: Creating configMap with name cm-test-opt-upd-b565aae9-5b35-4497-bf1c-e1897f4e7f19 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ab550369-2f34-45e6-9e2f-893cba36feb5 STEP: Updating configmap cm-test-opt-upd-b565aae9-5b35-4497-bf1c-e1897f4e7f19 STEP: Creating configMap with name cm-test-opt-create-c4002cd6-df9e-4002-9237-370712a8d498 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:40:43.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5920" for this suite. • [SLOW TEST:10.216 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":268,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:40:43.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 11 23:40:43.294: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 23:40:43.335: INFO: Waiting for terminating namespaces to be deleted... Apr 11 23:40:43.338: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 11 23:40:43.344: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 11 23:40:43.344: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 23:40:43.344: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 11 23:40:43.344: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 23:40:43.344: INFO: pod-configmaps-a4c86a0e-ca2a-47e3-a2c8-7ce0e3f79268 from configmap-5920 started at 2020-04-11 23:40:33 +0000 UTC (3 container statuses recorded) Apr 11 23:40:43.344: INFO: Container createcm-volume-test ready: true, restart count 0 Apr 11 23:40:43.344: INFO: Container delcm-volume-test ready: true, restart count 0 Apr 11 23:40:43.344: INFO: Container updcm-volume-test ready: true, restart count 0 Apr 11 23:40:43.344: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 11 23:40:43.361: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 11 23:40:43.361: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 23:40:43.361: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 11 23:40:43.361: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a5e0a0fd-28b9-4b56-9675-ee636bb755bb 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-a5e0a0fd-28b9-4b56-9675-ee636bb755bb off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a5e0a0fd-28b9-4b56-9675-ee636bb755bb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:45:51.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4819" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.290 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":19,"skipped":283,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:45:51.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 11 23:45:51.645: INFO: Waiting up to 5m0s for pod "pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1" in namespace "emptydir-1472" to be "Succeeded or Failed" Apr 11 23:45:51.648: INFO: Pod "pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.617874ms Apr 11 23:45:53.652: INFO: Pod "pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007264958s Apr 11 23:45:55.666: INFO: Pod "pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02149641s STEP: Saw pod success Apr 11 23:45:55.666: INFO: Pod "pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1" satisfied condition "Succeeded or Failed" Apr 11 23:45:55.669: INFO: Trying to get logs from node latest-worker2 pod pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1 container test-container: STEP: delete the pod Apr 11 23:45:55.705: INFO: Waiting for pod pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1 to disappear Apr 11 23:45:55.708: INFO: Pod pod-7bfbd418-497d-476d-b1bd-1803b1cc6ca1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:45:55.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1472" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":287,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:45:55.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 11 23:45:55.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5" in namespace "projected-1045" to be "Succeeded or Failed" Apr 11 23:45:55.812: INFO: Pod "downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.112498ms Apr 11 23:45:57.816: INFO: Pod "downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022376615s Apr 11 23:45:59.820: INFO: Pod "downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026561902s STEP: Saw pod success Apr 11 23:45:59.820: INFO: Pod "downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5" satisfied condition "Succeeded or Failed" Apr 11 23:45:59.823: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5 container client-container: STEP: delete the pod Apr 11 23:45:59.883: INFO: Waiting for pod downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5 to disappear Apr 11 23:45:59.888: INFO: Pod downwardapi-volume-e5d3324a-c214-4b1c-9554-7b391028d8f5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:45:59.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1045" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:45:59.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 11 23:46:00.021: INFO: Waiting up to 5m0s for pod 
"pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b" in namespace "emptydir-4678" to be "Succeeded or Failed" Apr 11 23:46:00.026: INFO: Pod "pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.103608ms Apr 11 23:46:02.029: INFO: Pod "pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008536495s Apr 11 23:46:04.034: INFO: Pod "pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01296336s STEP: Saw pod success Apr 11 23:46:04.034: INFO: Pod "pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b" satisfied condition "Succeeded or Failed" Apr 11 23:46:04.037: INFO: Trying to get logs from node latest-worker pod pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b container test-container: STEP: delete the pod Apr 11 23:46:04.070: INFO: Waiting for pod pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b to disappear Apr 11 23:46:04.092: INFO: Pod pod-63b44d1c-270e-4a0a-9baf-bf90064adb0b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:46:04.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4678" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":344,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:46:04.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-4ksq STEP: Creating a pod to test atomic-volume-subpath Apr 11 23:46:04.202: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4ksq" in namespace "subpath-2911" to be "Succeeded or Failed" Apr 11 23:46:04.206: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479969ms Apr 11 23:46:06.210: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007524782s Apr 11 23:46:08.214: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 4.011488585s Apr 11 23:46:10.218: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 6.015828622s Apr 11 23:46:12.222: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.019647905s Apr 11 23:46:14.226: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 10.023812239s Apr 11 23:46:16.230: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 12.027935355s Apr 11 23:46:18.235: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 14.032179431s Apr 11 23:46:20.239: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 16.036458121s Apr 11 23:46:22.243: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 18.040430501s Apr 11 23:46:24.247: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 20.044661829s Apr 11 23:46:26.252: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Running", Reason="", readiness=true. Elapsed: 22.049103116s Apr 11 23:46:28.256: INFO: Pod "pod-subpath-test-secret-4ksq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053237962s STEP: Saw pod success Apr 11 23:46:28.256: INFO: Pod "pod-subpath-test-secret-4ksq" satisfied condition "Succeeded or Failed" Apr 11 23:46:28.258: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-4ksq container test-container-subpath-secret-4ksq: STEP: delete the pod Apr 11 23:46:28.285: INFO: Waiting for pod pod-subpath-test-secret-4ksq to disappear Apr 11 23:46:28.291: INFO: Pod pod-subpath-test-secret-4ksq no longer exists STEP: Deleting pod pod-subpath-test-secret-4ksq Apr 11 23:46:28.291: INFO: Deleting pod "pod-subpath-test-secret-4ksq" in namespace "subpath-2911" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:46:28.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2911" for this suite. 
• [SLOW TEST:24.245 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":23,"skipped":359,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:46:28.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-8c5x STEP: Creating a pod to test atomic-volume-subpath Apr 11 23:46:28.411: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8c5x" in namespace "subpath-3490" to be "Succeeded or Failed" Apr 11 23:46:28.415: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.119603ms Apr 11 23:46:30.420: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008671804s Apr 11 23:46:32.424: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 4.012985038s Apr 11 23:46:34.430: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 6.018635421s Apr 11 23:46:36.434: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 8.023038609s Apr 11 23:46:38.438: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 10.026757545s Apr 11 23:46:40.442: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 12.031266092s Apr 11 23:46:42.447: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 14.035866408s Apr 11 23:46:44.451: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 16.040031099s Apr 11 23:46:46.456: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 18.044572612s Apr 11 23:46:48.463: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 20.051868281s Apr 11 23:46:50.466: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Running", Reason="", readiness=true. Elapsed: 22.055443838s Apr 11 23:46:52.470: INFO: Pod "pod-subpath-test-downwardapi-8c5x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058829485s STEP: Saw pod success Apr 11 23:46:52.470: INFO: Pod "pod-subpath-test-downwardapi-8c5x" satisfied condition "Succeeded or Failed" Apr 11 23:46:52.472: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-8c5x container test-container-subpath-downwardapi-8c5x: STEP: delete the pod Apr 11 23:46:52.536: INFO: Waiting for pod pod-subpath-test-downwardapi-8c5x to disappear Apr 11 23:46:52.553: INFO: Pod pod-subpath-test-downwardapi-8c5x no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-8c5x Apr 11 23:46:52.553: INFO: Deleting pod "pod-subpath-test-downwardapi-8c5x" in namespace "subpath-3490" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:46:52.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3490" for this suite. • [SLOW TEST:24.237 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":24,"skipped":362,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 
23:46:52.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9718 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9718 STEP: creating replication controller externalsvc in namespace services-9718 I0411 23:46:52.776191 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9718, replica count: 2 I0411 23:46:55.829231 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0411 23:46:58.829476 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 11 23:46:58.860: INFO: Creating new exec pod Apr 11 23:47:02.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9718 execpodknc4z -- /bin/sh -x -c nslookup clusterip-service' Apr 11 23:47:05.334: INFO: stderr: "I0411 23:47:05.243928 37 log.go:172] (0xc0000e8fd0) (0xc000689540) Create stream\nI0411 23:47:05.243991 37 log.go:172] (0xc0000e8fd0) (0xc000689540) Stream added, broadcasting: 1\nI0411 23:47:05.247821 37 log.go:172] (0xc0000e8fd0) Reply frame received for 1\nI0411 23:47:05.247867 37 log.go:172] (0xc0000e8fd0) 
(0xc000a240a0) Create stream\nI0411 23:47:05.247880 37 log.go:172] (0xc0000e8fd0) (0xc000a240a0) Stream added, broadcasting: 3\nI0411 23:47:05.248804 37 log.go:172] (0xc0000e8fd0) Reply frame received for 3\nI0411 23:47:05.248847 37 log.go:172] (0xc0000e8fd0) (0xc0006895e0) Create stream\nI0411 23:47:05.248862 37 log.go:172] (0xc0000e8fd0) (0xc0006895e0) Stream added, broadcasting: 5\nI0411 23:47:05.249943 37 log.go:172] (0xc0000e8fd0) Reply frame received for 5\nI0411 23:47:05.314484 37 log.go:172] (0xc0000e8fd0) Data frame received for 5\nI0411 23:47:05.314525 37 log.go:172] (0xc0006895e0) (5) Data frame handling\nI0411 23:47:05.314552 37 log.go:172] (0xc0006895e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0411 23:47:05.323740 37 log.go:172] (0xc0000e8fd0) Data frame received for 3\nI0411 23:47:05.323764 37 log.go:172] (0xc000a240a0) (3) Data frame handling\nI0411 23:47:05.323777 37 log.go:172] (0xc000a240a0) (3) Data frame sent\nI0411 23:47:05.324788 37 log.go:172] (0xc0000e8fd0) Data frame received for 3\nI0411 23:47:05.324824 37 log.go:172] (0xc000a240a0) (3) Data frame handling\nI0411 23:47:05.324859 37 log.go:172] (0xc000a240a0) (3) Data frame sent\nI0411 23:47:05.325309 37 log.go:172] (0xc0000e8fd0) Data frame received for 3\nI0411 23:47:05.325375 37 log.go:172] (0xc000a240a0) (3) Data frame handling\nI0411 23:47:05.325606 37 log.go:172] (0xc0000e8fd0) Data frame received for 5\nI0411 23:47:05.325648 37 log.go:172] (0xc0006895e0) (5) Data frame handling\nI0411 23:47:05.327557 37 log.go:172] (0xc0000e8fd0) Data frame received for 1\nI0411 23:47:05.327768 37 log.go:172] (0xc000689540) (1) Data frame handling\nI0411 23:47:05.327820 37 log.go:172] (0xc000689540) (1) Data frame sent\nI0411 23:47:05.327876 37 log.go:172] (0xc0000e8fd0) (0xc000689540) Stream removed, broadcasting: 1\nI0411 23:47:05.327915 37 log.go:172] (0xc0000e8fd0) Go away received\nI0411 23:47:05.328334 37 log.go:172] (0xc0000e8fd0) (0xc000689540) Stream removed, broadcasting: 
1\nI0411 23:47:05.328358 37 log.go:172] (0xc0000e8fd0) (0xc000a240a0) Stream removed, broadcasting: 3\nI0411 23:47:05.328369 37 log.go:172] (0xc0000e8fd0) (0xc0006895e0) Stream removed, broadcasting: 5\n" Apr 11 23:47:05.334: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9718.svc.cluster.local\tcanonical name = externalsvc.services-9718.svc.cluster.local.\nName:\texternalsvc.services-9718.svc.cluster.local\nAddress: 10.96.102.169\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9718, will wait for the garbage collector to delete the pods Apr 11 23:47:05.394: INFO: Deleting ReplicationController externalsvc took: 6.493428ms Apr 11 23:47:05.694: INFO: Terminating ReplicationController externalsvc pods took: 300.3094ms Apr 11 23:47:13.076: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:13.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9718" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:20.527 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":25,"skipped":365,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:13.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:13.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1473" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":26,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:13.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-2779/secret-test-521f5ea7-219e-4803-8dd9-a288cc2e3d3a STEP: Creating a pod to test consume secrets Apr 11 23:47:13.270: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac" in namespace "secrets-2779" to be "Succeeded or Failed" Apr 11 23:47:13.320: INFO: Pod "pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 50.378935ms Apr 11 23:47:15.324: INFO: Pod "pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054519609s Apr 11 23:47:17.328: INFO: Pod "pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058870575s STEP: Saw pod success Apr 11 23:47:17.329: INFO: Pod "pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac" satisfied condition "Succeeded or Failed" Apr 11 23:47:17.332: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac container env-test: STEP: delete the pod Apr 11 23:47:17.370: INFO: Waiting for pod pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac to disappear Apr 11 23:47:17.380: INFO: Pod pod-configmaps-3ecada30-3da7-44a7-b65b-795dbb2ab5ac no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:17.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2779" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:17.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 11 23:47:17.470: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 11 23:47:22.475: INFO: Pod name 
pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:22.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5082" for this suite. • [SLOW TEST:5.220 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":28,"skipped":439,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:22.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 11 23:47:22.721: INFO: Waiting up to 5m0s for pod "pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32" in namespace "emptydir-1323" to be "Succeeded or Failed" Apr 11 23:47:22.723: INFO: Pod "pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.386782ms Apr 11 23:47:24.727: INFO: Pod "pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00650202s Apr 11 23:47:26.731: INFO: Pod "pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009843442s STEP: Saw pod success Apr 11 23:47:26.731: INFO: Pod "pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32" satisfied condition "Succeeded or Failed" Apr 11 23:47:26.734: INFO: Trying to get logs from node latest-worker2 pod pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32 container test-container: STEP: delete the pod Apr 11 23:47:26.771: INFO: Waiting for pod pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32 to disappear Apr 11 23:47:26.783: INFO: Pod pod-b66444ec-8f70-4aef-b079-ad0cf3d8dc32 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:26.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1323" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:26.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-e6ac9b20-9bfb-422c-88da-94e9442b0634 STEP: Creating a pod to test consume secrets Apr 11 23:47:26.880: INFO: Waiting up to 5m0s for pod "pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66" in namespace "secrets-6681" to be "Succeeded or Failed" Apr 11 23:47:26.897: INFO: Pod "pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66": Phase="Pending", Reason="", readiness=false. Elapsed: 16.968815ms Apr 11 23:47:28.901: INFO: Pod "pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021044829s Apr 11 23:47:30.905: INFO: Pod "pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025370538s STEP: Saw pod success Apr 11 23:47:30.905: INFO: Pod "pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66" satisfied condition "Succeeded or Failed" Apr 11 23:47:30.908: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66 container secret-volume-test: STEP: delete the pod Apr 11 23:47:30.957: INFO: Waiting for pod pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66 to disappear Apr 11 23:47:30.969: INFO: Pod pod-secrets-2b82f8a3-e25e-4579-8f1c-3dd7eb6eff66 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:30.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6681" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:30.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:31.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3810" for 
this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":31,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:31.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5082f77b-764b-420f-b28b-244d47f10a37 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5082f77b-764b-420f-b28b-244d47f10a37 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:37.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8336" for this suite. 
• [SLOW TEST:6.152 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":542,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:37.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0411 23:47:38.422480 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 11 23:47:38.422: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:38.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3513" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":33,"skipped":558,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:38.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:47:38.460: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1503" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:42.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 11 23:47:42.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 11 23:47:43.376: INFO: stderr: "" Apr 11 23:47:43.376: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:47:43.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6661" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":35,"skipped":586,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:47:43.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:47:43.568: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Pending, waiting for it to be Running (with Ready = true) Apr 11 23:47:45.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Pending, waiting for it to be Running (with Ready = true) Apr 11 23:47:47.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:47:49.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:47:51.575: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:47:53.584: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 
23:47:55.574: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:47:57.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:47:59.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:48:01.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:48:03.575: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:48:05.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:48:07.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:48:09.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = false) Apr 11 23:48:11.572: INFO: The status of Pod test-webserver-1db9f859-3dba-4257-a929-1b059e2538aa is Running (Ready = true) Apr 11 23:48:11.575: INFO: Container started at 2020-04-11 23:47:45 +0000 UTC, pod became ready at 2020-04-11 23:48:09 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:48:11.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6795" for this suite. 
• [SLOW TEST:28.078 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":587,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:48:11.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:11.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3679" for this suite. 
• [SLOW TEST:60.068 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":621,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:11.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 11 23:49:14.793: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:14.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7579" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:14.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 11 23:49:14.946: INFO: Waiting up to 5m0s for pod "var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8" in namespace "var-expansion-3983" to be "Succeeded or Failed" Apr 11 23:49:14.969: INFO: Pod "var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.999793ms Apr 11 23:49:17.028: INFO: Pod "var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.082280507s Apr 11 23:49:19.037: INFO: Pod "var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09079422s STEP: Saw pod success Apr 11 23:49:19.037: INFO: Pod "var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8" satisfied condition "Succeeded or Failed" Apr 11 23:49:19.040: INFO: Trying to get logs from node latest-worker pod var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8 container dapi-container: STEP: delete the pod Apr 11 23:49:19.124: INFO: Waiting for pod var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8 to disappear Apr 11 23:49:19.128: INFO: Pod var-expansion-af590ef5-bb73-4ec2-8351-46339b4376c8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:19.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3983" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":680,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:19.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-85272d99-331c-4974-aa6d-c52c3eb12bd1 STEP: Creating a pod to test consume secrets Apr 11 23:49:19.202: INFO: Waiting up to 5m0s for pod "pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9" in namespace "secrets-698" to be "Succeeded or Failed" Apr 11 23:49:19.218: INFO: Pod "pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.976012ms Apr 11 23:49:21.220: INFO: Pod "pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018816305s Apr 11 23:49:23.223: INFO: Pod "pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021563747s STEP: Saw pod success Apr 11 23:49:23.223: INFO: Pod "pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9" satisfied condition "Succeeded or Failed" Apr 11 23:49:23.227: INFO: Trying to get logs from node latest-worker pod pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9 container secret-volume-test: STEP: delete the pod Apr 11 23:49:23.287: INFO: Waiting for pod pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9 to disappear Apr 11 23:49:23.293: INFO: Pod pod-secrets-80615f54-8f90-4159-b8a6-784b4a97f4f9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:23.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-698" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":720,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:23.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:49:23.372: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 18.220189ms)
Apr 11 23:49:23.375: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.539534ms)
Apr 11 23:49:23.378: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.838648ms)
Apr 11 23:49:23.382: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.362885ms)
Apr 11 23:49:23.385: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.009949ms)
Apr 11 23:49:23.406: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 21.224301ms)
Apr 11 23:49:23.410: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.687173ms)
Apr 11 23:49:23.413: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.196935ms)
Apr 11 23:49:23.416: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.374454ms)
Apr 11 23:49:23.420: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.63481ms)
Apr 11 23:49:23.423: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.085905ms)
Apr 11 23:49:23.426: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.172768ms)
Apr 11 23:49:23.430: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.468242ms)
Apr 11 23:49:23.433: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.365065ms)
Apr 11 23:49:23.437: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.579346ms)
Apr 11 23:49:23.441: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.623525ms)
Apr 11 23:49:23.445: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 4.10355ms)
Apr 11 23:49:23.448: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.10728ms)
Apr 11 23:49:23.451: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.091616ms)
Apr 11 23:49:23.455: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.855278ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:23.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8941" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":41,"skipped":722,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:23.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:49:23.545: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:29.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3526" for this suite. 
• [SLOW TEST:6.317 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":42,"skipped":722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:29.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:45.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3913" for this suite. STEP: Destroying namespace "nsdeletetest-7297" for this suite. Apr 11 23:49:45.030: INFO: Namespace nsdeletetest-7297 was already deleted STEP: Destroying namespace "nsdeletetest-6005" for this suite. • [SLOW TEST:15.252 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":43,"skipped":763,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:45.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:49:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9615" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":784,"failed":0} ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:49:49.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7297 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7297 I0411 23:49:49.270852 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7297, replica count: 2 I0411 23:49:52.321303 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0411 23:49:55.321561 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 11 23:49:55.321: INFO: Creating new exec pod Apr 11 23:50:00.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7297 execpodh67fc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 11 23:50:00.565: INFO: stderr: "I0411 23:50:00.470515 91 log.go:172] (0xc0006e2630) (0xc0006d7220) Create stream\nI0411 23:50:00.470587 91 log.go:172] (0xc0006e2630) (0xc0006d7220) Stream added, broadcasting: 1\nI0411 23:50:00.473757 91 log.go:172] (0xc0006e2630) Reply frame received for 1\nI0411 23:50:00.473806 91 log.go:172] (0xc0006e2630) (0xc000a78000) Create stream\nI0411 23:50:00.473823 91 log.go:172] (0xc0006e2630) (0xc000a78000) Stream added, broadcasting: 3\nI0411 23:50:00.475151 91 log.go:172] (0xc0006e2630) Reply frame received for 3\nI0411 23:50:00.475185 91 log.go:172] (0xc0006e2630) (0xc0006d7400) Create stream\nI0411 23:50:00.475203 91 log.go:172] (0xc0006e2630) (0xc0006d7400) Stream added, broadcasting: 5\nI0411 23:50:00.476231 91 log.go:172] (0xc0006e2630) Reply frame received for 5\nI0411 23:50:00.558336 91 log.go:172] (0xc0006e2630) Data frame received for 3\nI0411 23:50:00.558384 91 log.go:172] (0xc000a78000) (3) Data frame handling\nI0411 23:50:00.558409 91 log.go:172] (0xc0006e2630) Data frame received for 5\nI0411 23:50:00.558418 91 log.go:172] (0xc0006d7400) (5) Data frame handling\nI0411 23:50:00.558428 91 log.go:172] (0xc0006d7400) (5) Data frame sent\nI0411 23:50:00.558437 91 log.go:172] (0xc0006e2630) Data frame received for 5\nI0411 23:50:00.558445 91 log.go:172] (0xc0006d7400) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0411 23:50:00.559996 91 
log.go:172] (0xc0006e2630) Data frame received for 1\nI0411 23:50:00.560021 91 log.go:172] (0xc0006d7220) (1) Data frame handling\nI0411 23:50:00.560045 91 log.go:172] (0xc0006d7220) (1) Data frame sent\nI0411 23:50:00.560073 91 log.go:172] (0xc0006e2630) (0xc0006d7220) Stream removed, broadcasting: 1\nI0411 23:50:00.560148 91 log.go:172] (0xc0006e2630) Go away received\nI0411 23:50:00.560447 91 log.go:172] (0xc0006e2630) (0xc0006d7220) Stream removed, broadcasting: 1\nI0411 23:50:00.560467 91 log.go:172] (0xc0006e2630) (0xc000a78000) Stream removed, broadcasting: 3\nI0411 23:50:00.560476 91 log.go:172] (0xc0006e2630) (0xc0006d7400) Stream removed, broadcasting: 5\n" Apr 11 23:50:00.565: INFO: stdout: "" Apr 11 23:50:00.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7297 execpodh67fc -- /bin/sh -x -c nc -zv -t -w 2 10.96.61.86 80' Apr 11 23:50:00.755: INFO: stderr: "I0411 23:50:00.696594 113 log.go:172] (0xc0005a3970) (0xc000705360) Create stream\nI0411 23:50:00.696682 113 log.go:172] (0xc0005a3970) (0xc000705360) Stream added, broadcasting: 1\nI0411 23:50:00.700483 113 log.go:172] (0xc0005a3970) Reply frame received for 1\nI0411 23:50:00.700544 113 log.go:172] (0xc0005a3970) (0xc000705540) Create stream\nI0411 23:50:00.700570 113 log.go:172] (0xc0005a3970) (0xc000705540) Stream added, broadcasting: 3\nI0411 23:50:00.701806 113 log.go:172] (0xc0005a3970) Reply frame received for 3\nI0411 23:50:00.701869 113 log.go:172] (0xc0005a3970) (0xc000b92000) Create stream\nI0411 23:50:00.701890 113 log.go:172] (0xc0005a3970) (0xc000b92000) Stream added, broadcasting: 5\nI0411 23:50:00.703009 113 log.go:172] (0xc0005a3970) Reply frame received for 5\nI0411 23:50:00.747973 113 log.go:172] (0xc0005a3970) Data frame received for 5\nI0411 23:50:00.748020 113 log.go:172] (0xc000b92000) (5) Data frame handling\nI0411 23:50:00.748065 113 log.go:172] (0xc000b92000) (5) Data frame sent\nI0411 
23:50:00.748093 113 log.go:172] (0xc0005a3970) Data frame received for 5\nI0411 23:50:00.748110 113 log.go:172] (0xc000b92000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.61.86 80\nConnection to 10.96.61.86 80 port [tcp/http] succeeded!\nI0411 23:50:00.748221 113 log.go:172] (0xc0005a3970) Data frame received for 3\nI0411 23:50:00.748265 113 log.go:172] (0xc000705540) (3) Data frame handling\nI0411 23:50:00.749898 113 log.go:172] (0xc0005a3970) Data frame received for 1\nI0411 23:50:00.749928 113 log.go:172] (0xc000705360) (1) Data frame handling\nI0411 23:50:00.749943 113 log.go:172] (0xc000705360) (1) Data frame sent\nI0411 23:50:00.749964 113 log.go:172] (0xc0005a3970) (0xc000705360) Stream removed, broadcasting: 1\nI0411 23:50:00.750011 113 log.go:172] (0xc0005a3970) Go away received\nI0411 23:50:00.750341 113 log.go:172] (0xc0005a3970) (0xc000705360) Stream removed, broadcasting: 1\nI0411 23:50:00.750364 113 log.go:172] (0xc0005a3970) (0xc000705540) Stream removed, broadcasting: 3\nI0411 23:50:00.750378 113 log.go:172] (0xc0005a3970) (0xc000b92000) Stream removed, broadcasting: 5\n" Apr 11 23:50:00.755: INFO: stdout: "" Apr 11 23:50:00.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7297 execpodh67fc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32748' Apr 11 23:50:00.996: INFO: stderr: "I0411 23:50:00.900972 134 log.go:172] (0xc000912630) (0xc00081d2c0) Create stream\nI0411 23:50:00.901042 134 log.go:172] (0xc000912630) (0xc00081d2c0) Stream added, broadcasting: 1\nI0411 23:50:00.904563 134 log.go:172] (0xc000912630) Reply frame received for 1\nI0411 23:50:00.904628 134 log.go:172] (0xc000912630) (0xc000a96000) Create stream\nI0411 23:50:00.904649 134 log.go:172] (0xc000912630) (0xc000a96000) Stream added, broadcasting: 3\nI0411 23:50:00.905949 134 log.go:172] (0xc000912630) Reply frame received for 3\nI0411 23:50:00.905990 134 log.go:172] (0xc000912630) (0xc00081d540) 
Create stream\nI0411 23:50:00.906003 134 log.go:172] (0xc000912630) (0xc00081d540) Stream added, broadcasting: 5\nI0411 23:50:00.906950 134 log.go:172] (0xc000912630) Reply frame received for 5\nI0411 23:50:00.989518 134 log.go:172] (0xc000912630) Data frame received for 5\nI0411 23:50:00.989545 134 log.go:172] (0xc00081d540) (5) Data frame handling\nI0411 23:50:00.989560 134 log.go:172] (0xc00081d540) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32748\nI0411 23:50:00.989673 134 log.go:172] (0xc000912630) Data frame received for 5\nI0411 23:50:00.989699 134 log.go:172] (0xc00081d540) (5) Data frame handling\nI0411 23:50:00.989727 134 log.go:172] (0xc00081d540) (5) Data frame sent\nConnection to 172.17.0.13 32748 port [tcp/32748] succeeded!\nI0411 23:50:00.990080 134 log.go:172] (0xc000912630) Data frame received for 5\nI0411 23:50:00.990094 134 log.go:172] (0xc00081d540) (5) Data frame handling\nI0411 23:50:00.990247 134 log.go:172] (0xc000912630) Data frame received for 3\nI0411 23:50:00.990262 134 log.go:172] (0xc000a96000) (3) Data frame handling\nI0411 23:50:00.991663 134 log.go:172] (0xc000912630) Data frame received for 1\nI0411 23:50:00.991681 134 log.go:172] (0xc00081d2c0) (1) Data frame handling\nI0411 23:50:00.991692 134 log.go:172] (0xc00081d2c0) (1) Data frame sent\nI0411 23:50:00.991705 134 log.go:172] (0xc000912630) (0xc00081d2c0) Stream removed, broadcasting: 1\nI0411 23:50:00.991724 134 log.go:172] (0xc000912630) Go away received\nI0411 23:50:00.992145 134 log.go:172] (0xc000912630) (0xc00081d2c0) Stream removed, broadcasting: 1\nI0411 23:50:00.992165 134 log.go:172] (0xc000912630) (0xc000a96000) Stream removed, broadcasting: 3\nI0411 23:50:00.992175 134 log.go:172] (0xc000912630) (0xc00081d540) Stream removed, broadcasting: 5\n" Apr 11 23:50:00.996: INFO: stdout: "" Apr 11 23:50:00.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7297 execpodh67fc -- 
/bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32748' Apr 11 23:50:01.203: INFO: stderr: "I0411 23:50:01.128575 155 log.go:172] (0xc00098a0b0) (0xc000a620a0) Create stream\nI0411 23:50:01.128643 155 log.go:172] (0xc00098a0b0) (0xc000a620a0) Stream added, broadcasting: 1\nI0411 23:50:01.133318 155 log.go:172] (0xc00098a0b0) Reply frame received for 1\nI0411 23:50:01.133396 155 log.go:172] (0xc00098a0b0) (0xc0005f5720) Create stream\nI0411 23:50:01.133434 155 log.go:172] (0xc00098a0b0) (0xc0005f5720) Stream added, broadcasting: 3\nI0411 23:50:01.134507 155 log.go:172] (0xc00098a0b0) Reply frame received for 3\nI0411 23:50:01.134560 155 log.go:172] (0xc00098a0b0) (0xc0009500a0) Create stream\nI0411 23:50:01.134589 155 log.go:172] (0xc00098a0b0) (0xc0009500a0) Stream added, broadcasting: 5\nI0411 23:50:01.135529 155 log.go:172] (0xc00098a0b0) Reply frame received for 5\nI0411 23:50:01.196132 155 log.go:172] (0xc00098a0b0) Data frame received for 5\nI0411 23:50:01.196158 155 log.go:172] (0xc0009500a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32748\nConnection to 172.17.0.12 32748 port [tcp/32748] succeeded!\nI0411 23:50:01.196180 155 log.go:172] (0xc00098a0b0) Data frame received for 3\nI0411 23:50:01.196200 155 log.go:172] (0xc0005f5720) (3) Data frame handling\nI0411 23:50:01.196218 155 log.go:172] (0xc0009500a0) (5) Data frame sent\nI0411 23:50:01.196228 155 log.go:172] (0xc00098a0b0) Data frame received for 5\nI0411 23:50:01.196234 155 log.go:172] (0xc0009500a0) (5) Data frame handling\nI0411 23:50:01.198263 155 log.go:172] (0xc00098a0b0) Data frame received for 1\nI0411 23:50:01.198290 155 log.go:172] (0xc000a620a0) (1) Data frame handling\nI0411 23:50:01.198310 155 log.go:172] (0xc000a620a0) (1) Data frame sent\nI0411 23:50:01.198326 155 log.go:172] (0xc00098a0b0) (0xc000a620a0) Stream removed, broadcasting: 1\nI0411 23:50:01.198345 155 log.go:172] (0xc00098a0b0) Go away received\nI0411 23:50:01.198640 155 log.go:172] (0xc00098a0b0) (0xc000a620a0) Stream 
removed, broadcasting: 1\nI0411 23:50:01.198655 155 log.go:172] (0xc00098a0b0) (0xc0005f5720) Stream removed, broadcasting: 3\nI0411 23:50:01.198663 155 log.go:172] (0xc00098a0b0) (0xc0009500a0) Stream removed, broadcasting: 5\n" Apr 11 23:50:01.203: INFO: stdout: "" Apr 11 23:50:01.203: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:50:01.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7297" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.125 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":45,"skipped":784,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:50:01.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-8c5afeca-9ad6-44f5-af55-d8a9ec0a3080 in namespace container-probe-5670 Apr 11 23:50:05.340: INFO: Started pod liveness-8c5afeca-9ad6-44f5-af55-d8a9ec0a3080 in namespace container-probe-5670 STEP: checking the pod's current state and verifying that restartCount is present Apr 11 23:50:05.344: INFO: Initial restart count of pod liveness-8c5afeca-9ad6-44f5-af55-d8a9ec0a3080 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:54:05.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5670" for this suite. 
• [SLOW TEST:244.628 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":790,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:54:05.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:54:05.993: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9b09e7e8-4c25-42c2-a71c-c6fd7afd4791" in namespace "security-context-test-8311" to be "Succeeded or Failed" Apr 11 23:54:06.002: INFO: Pod "alpine-nnp-false-9b09e7e8-4c25-42c2-a71c-c6fd7afd4791": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.968714ms Apr 11 23:54:08.007: INFO: Pod "alpine-nnp-false-9b09e7e8-4c25-42c2-a71c-c6fd7afd4791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013243786s Apr 11 23:54:10.010: INFO: Pod "alpine-nnp-false-9b09e7e8-4c25-42c2-a71c-c6fd7afd4791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016745319s Apr 11 23:54:10.010: INFO: Pod "alpine-nnp-false-9b09e7e8-4c25-42c2-a71c-c6fd7afd4791" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:54:10.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8311" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":797,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:54:10.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:54:10.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 11 23:54:10.640: INFO: Got : ADDED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-11T23:54:10Z generation:1 name:name1 resourceVersion:7331956 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b876a1b6-cedf-4487-ae89-788e180062b0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 11 23:54:20.650: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-11T23:54:20Z generation:1 name:name2 resourceVersion:7332006 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:027cdc79-01da-44f6-8c17-2e71f382c9bb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 11 23:54:30.655: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-11T23:54:10Z generation:2 name:name1 resourceVersion:7332034 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b876a1b6-cedf-4487-ae89-788e180062b0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 11 23:54:40.662: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-11T23:54:20Z generation:2 name:name2 resourceVersion:7332064 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:027cdc79-01da-44f6-8c17-2e71f382c9bb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 11 23:54:50.669: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-11T23:54:10Z generation:2 name:name1 resourceVersion:7332094 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b876a1b6-cedf-4487-ae89-788e180062b0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: 
Deleting second CR Apr 11 23:55:00.702: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-11T23:54:20Z generation:2 name:name2 resourceVersion:7332125 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:027cdc79-01da-44f6-8c17-2e71f382c9bb] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:55:11.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1732" for this suite. • [SLOW TEST:61.184 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":48,"skipped":797,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:55:11.223: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:55:11.342: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"642fd432-6b00-4981-9dd2-2f7b0e8fb925", Controller:(*bool)(0xc002f935c6), BlockOwnerDeletion:(*bool)(0xc002f935c7)}} Apr 11 23:55:11.368: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"838b4ff9-9366-46e1-a778-697a02a7019b", Controller:(*bool)(0xc00246cc16), BlockOwnerDeletion:(*bool)(0xc00246cc17)}} Apr 11 23:55:11.395: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7c19468e-169a-4283-899e-9fc17ff6c197", Controller:(*bool)(0xc00271f016), BlockOwnerDeletion:(*bool)(0xc00271f017)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:55:16.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-924" for this suite. 
• [SLOW TEST:5.255 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":49,"skipped":826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:55:16.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-2c02df47-4d98-412e-8060-d09c307ec08c STEP: Creating secret with name s-test-opt-upd-979c3e7e-695c-47c2-abbe-40f5f862991d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2c02df47-4d98-412e-8060-d09c307ec08c STEP: Updating secret s-test-opt-upd-979c3e7e-695c-47c2-abbe-40f5f862991d STEP: Creating secret with name s-test-opt-create-617e1d8f-00e2-4811-9d78-1067f9e3fd18 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:56:37.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "secrets-2714" for this suite. • [SLOW TEST:80.537 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":853,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:56:37.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-5f23d007-5965-4501-a581-a3877943c707 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:56:37.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9730" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":51,"skipped":871,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:56:37.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 11 23:56:41.668: INFO: Successfully updated pod "adopt-release-5j62r"
STEP: Checking that the Job readopts the Pod
Apr 11 23:56:41.668: INFO: Waiting up to 15m0s for pod "adopt-release-5j62r" in namespace "job-5137" to be "adopted"
Apr 11 23:56:41.671: INFO: Pod "adopt-release-5j62r": Phase="Running", Reason="", readiness=true. Elapsed: 3.109436ms
Apr 11 23:56:43.676: INFO: Pod "adopt-release-5j62r": Phase="Running", Reason="", readiness=true. Elapsed: 2.007463487s
Apr 11 23:56:43.676: INFO: Pod "adopt-release-5j62r" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 11 23:56:44.184: INFO: Successfully updated pod "adopt-release-5j62r"
STEP: Checking that the Job releases the Pod
Apr 11 23:56:44.184: INFO: Waiting up to 15m0s for pod "adopt-release-5j62r" in namespace "job-5137" to be "released"
Apr 11 23:56:44.208: INFO: Pod "adopt-release-5j62r": Phase="Running", Reason="", readiness=true. Elapsed: 23.674697ms
Apr 11 23:56:46.212: INFO: Pod "adopt-release-5j62r": Phase="Running", Reason="", readiness=true. Elapsed: 2.027989765s
Apr 11 23:56:46.212: INFO: Pod "adopt-release-5j62r" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:56:46.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5137" for this suite.
• [SLOW TEST:9.121 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":52,"skipped":887,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:56:46.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-8afc9ec5-7625-4199-8c61-bb38065ff0ac
STEP: Creating a pod to test consume configMaps
Apr 11 23:56:46.314: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63" in namespace "configmap-5715" to be "Succeeded or Failed"
Apr 11 23:56:46.331: INFO: Pod "pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63": Phase="Pending", Reason="", readiness=false. Elapsed: 16.802506ms
Apr 11 23:56:48.335: INFO: Pod "pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020713395s
Apr 11 23:56:50.339: INFO: Pod "pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025289726s
STEP: Saw pod success
Apr 11 23:56:50.340: INFO: Pod "pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63" satisfied condition "Succeeded or Failed"
Apr 11 23:56:50.342: INFO: Trying to get logs from node latest-worker pod pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63 container configmap-volume-test:
STEP: delete the pod
Apr 11 23:56:50.360: INFO: Waiting for pod pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63 to disappear
Apr 11 23:56:50.371: INFO: Pod pod-configmaps-fd3e403c-debd-42a7-9831-d2cca5bb7a63 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:56:50.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5715" for this suite.
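For reference, the resources this ConfigMap-volume test creates can be sketched as a manifest. This is an illustrative reconstruction, not the suite's exact objects: the names, the key, and the image are assumptions (the suite generates UUID-suffixed names, as seen in the log, and uses its own test image); only the container name `configmap-volume-test` is taken from the log above.

```yaml
# Hypothetical sketch: a ConfigMap consumed as a volume, with an items:
# mapping so the key "data-1" appears under a custom relative path.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map      # illustrative; suite uses a UUID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps                 # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test        # container name as seen in the log above
    image: busybox                     # assumption; not the suite's test image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1           # the "mapping": key -> custom file path
```

A pod like this reaches phase Succeeded when `cat` finds the mapped file, which is the "Succeeded or Failed" condition the log polls for.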
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":909,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:56:50.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 11 23:56:50.475: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 11 23:56:59.528: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:56:59.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2390" for this suite.
• [SLOW TEST:9.162 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":920,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:56:59.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-4652762e-46e9-4a11-8d99-e76dfbbe80ef
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:56:59.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4756" for this suite.
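The failure exercised by the empty-key test is API-server validation: keys in a ConfigMap's `data` must be non-empty (and restricted to a file-name-like character set), so the create request is rejected before anything is persisted. A minimal manifest that triggers the error (the name here is illustrative; the suite uses a UUID-suffixed one as logged above):

```yaml
# Hypothetical sketch of an invalid ConfigMap: the API server rejects
# this create with a validation error because the data key is empty.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey   # illustrative name
data:
  "": value-1                     # empty key -> request fails validation
```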
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":55,"skipped":948,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:56:59.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 11 23:56:59.716: INFO: Waiting up to 5m0s for pod "downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a" in namespace "downward-api-8236" to be "Succeeded or Failed"
Apr 11 23:56:59.720: INFO: Pod "downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822232ms
Apr 11 23:57:01.723: INFO: Pod "downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007218362s
Apr 11 23:57:03.728: INFO: Pod "downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011786159s
STEP: Saw pod success
Apr 11 23:57:03.728: INFO: Pod "downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a" satisfied condition "Succeeded or Failed"
Apr 11 23:57:03.731: INFO: Trying to get logs from node latest-worker2 pod downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a container dapi-container:
STEP: delete the pod
Apr 11 23:57:03.797: INFO: Waiting for pod downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a to disappear
Apr 11 23:57:03.803: INFO: Pod downward-api-cb4fccec-5544-43c8-805b-6ad8dfbae17a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:03.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8236" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:03.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Apr 11 23:57:03.872: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5476 /api/v1/namespaces/watch-5476/configmaps/e2e-watch-test-watch-closed b715c00f-ae83-4aa3-9745-f79bc78c169d 7332727 0 2020-04-11 23:57:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 11 23:57:03.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5476 /api/v1/namespaces/watch-5476/configmaps/e2e-watch-test-watch-closed b715c00f-ae83-4aa3-9745-f79bc78c169d 7332728 0 2020-04-11 23:57:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Apr 11 23:57:03.883: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5476 /api/v1/namespaces/watch-5476/configmaps/e2e-watch-test-watch-closed b715c00f-ae83-4aa3-9745-f79bc78c169d 7332729 0 2020-04-11 23:57:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 11 23:57:03.883: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5476 /api/v1/namespaces/watch-5476/configmaps/e2e-watch-test-watch-closed b715c00f-ae83-4aa3-9745-f79bc78c169d 7332730 0 2020-04-11 23:57:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:03.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5476" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":57,"skipped":1024,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:03.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 11 23:57:08.512: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3310 pod-service-account-562e4a8e-b1cf-4539-9e16-9df80b08905f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 11 23:57:11.171: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3310 pod-service-account-562e4a8e-b1cf-4539-9e16-9df80b08905f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 11 23:57:11.363: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3310 pod-service-account-562e4a8e-b1cf-4539-9e16-9df80b08905f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:11.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3310" for this suite.
• [SLOW TEST:7.684 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":58,"skipped":1028,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:11.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 11 23:57:11.685: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 11 23:57:11.695: INFO: Number of nodes with available pods: 0
Apr 11 23:57:11.695: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 11 23:57:11.750: INFO: Number of nodes with available pods: 0
Apr 11 23:57:11.750: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:12.754: INFO: Number of nodes with available pods: 0
Apr 11 23:57:12.754: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:13.754: INFO: Number of nodes with available pods: 0
Apr 11 23:57:13.754: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:14.755: INFO: Number of nodes with available pods: 1
Apr 11 23:57:14.755: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 11 23:57:14.832: INFO: Number of nodes with available pods: 1
Apr 11 23:57:14.832: INFO: Number of running nodes: 0, number of available pods: 1
Apr 11 23:57:15.835: INFO: Number of nodes with available pods: 0
Apr 11 23:57:15.835: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 11 23:57:15.851: INFO: Number of nodes with available pods: 0
Apr 11 23:57:15.851: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:16.854: INFO: Number of nodes with available pods: 0
Apr 11 23:57:16.854: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:17.855: INFO: Number of nodes with available pods: 0
Apr 11 23:57:17.855: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:18.855: INFO: Number of nodes with available pods: 0
Apr 11 23:57:18.855: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:19.858: INFO: Number of nodes with available pods: 0
Apr 11 23:57:19.858: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:20.855: INFO: Number of nodes with available pods: 0
Apr 11 23:57:20.855: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:21.856: INFO: Number of nodes with available pods: 0
Apr 11 23:57:21.856: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:22.855: INFO: Number of nodes with available pods: 0
Apr 11 23:57:22.855: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:23.855: INFO: Number of nodes with available pods: 0
Apr 11 23:57:23.855: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:24.856: INFO: Number of nodes with available pods: 0
Apr 11 23:57:24.856: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:25.854: INFO: Number of nodes with available pods: 0
Apr 11 23:57:25.854: INFO: Node latest-worker2 is running more than one daemon pod
Apr 11 23:57:26.855: INFO: Number of nodes with available pods: 1
Apr 11 23:57:26.855: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-550, will wait for the garbage collector to delete the pods
Apr 11 23:57:26.921: INFO: Deleting DaemonSet.extensions daemon-set took: 6.693604ms
Apr 11 23:57:27.221: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.214282ms
Apr 11 23:57:33.124: INFO: Number of nodes with available pods: 0
Apr 11 23:57:33.124: INFO: Number of running nodes: 0, number of available pods: 0
Apr 11 23:57:33.127: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-550/daemonsets","resourceVersion":"7332933"},"items":null}
Apr 11 23:57:33.129: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-550/pods","resourceVersion":"7332933"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:33.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-550" for this suite.
• [SLOW TEST:21.592 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":59,"skipped":1033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:33.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 11 23:57:33.284: INFO: Waiting up to 5m0s for pod "pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06" in namespace "emptydir-2286" to be "Succeeded or Failed"
Apr 11 23:57:33.323: INFO: Pod "pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06": Phase="Pending", Reason="", readiness=false. Elapsed: 39.731689ms
Apr 11 23:57:35.328: INFO: Pod "pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043927052s
Apr 11 23:57:37.332: INFO: Pod "pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048733919s
STEP: Saw pod success
Apr 11 23:57:37.333: INFO: Pod "pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06" satisfied condition "Succeeded or Failed"
Apr 11 23:57:37.336: INFO: Trying to get logs from node latest-worker pod pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06 container test-container:
STEP: delete the pod
Apr 11 23:57:37.379: INFO: Waiting for pod pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06 to disappear
Apr 11 23:57:37.384: INFO: Pod pod-3f01d76d-41eb-4556-bfba-9c7b5e764e06 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:37.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2286" for this suite.
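The (non-root,0644,default) case above can be approximated with a pod like the following. This is a sketch under assumptions: the suite uses its own mount-test image and flags, so a busybox one-liner stands in here, and all names except the logged container name `test-container` are illustrative. The idea is to run as a non-root UID, write a file with mode 0644 into an emptyDir on the default medium, and report what was written.

```yaml
# Hypothetical sketch of the emptyDir permission test.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644        # illustrative; suite uses a UUID name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # the "non-root" part of the test matrix
  containers:
  - name: test-container         # container name as seen in the log above
    image: busybox               # assumption; not the suite's image
    # umask 0133 makes the shell redirect create the file as 0644
    command: ["sh", "-c", "umask 0133 && echo hi > /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # "default" medium: node storage, not medium: Memory (tmpfs)
```

As in the log, such a pod runs to completion and its phase is polled until it reaches "Succeeded or Failed".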
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":1055,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:37.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 11 23:57:37.440: INFO: Waiting up to 5m0s for pod "pod-c1136f41-2fce-4057-9ca7-cbf59785223a" in namespace "emptydir-1159" to be "Succeeded or Failed"
Apr 11 23:57:37.444: INFO: Pod "pod-c1136f41-2fce-4057-9ca7-cbf59785223a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266327ms
Apr 11 23:57:39.448: INFO: Pod "pod-c1136f41-2fce-4057-9ca7-cbf59785223a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00857866s
Apr 11 23:57:41.453: INFO: Pod "pod-c1136f41-2fce-4057-9ca7-cbf59785223a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013161138s
STEP: Saw pod success
Apr 11 23:57:41.453: INFO: Pod "pod-c1136f41-2fce-4057-9ca7-cbf59785223a" satisfied condition "Succeeded or Failed"
Apr 11 23:57:41.456: INFO: Trying to get logs from node latest-worker pod pod-c1136f41-2fce-4057-9ca7-cbf59785223a container test-container:
STEP: delete the pod
Apr 11 23:57:41.475: INFO: Waiting for pod pod-c1136f41-2fce-4057-9ca7-cbf59785223a to disappear
Apr 11 23:57:41.492: INFO: Pod pod-c1136f41-2fce-4057-9ca7-cbf59785223a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:41.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1159" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1068,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:41.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 11 23:57:42.110: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 11 23:57:44.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246262, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246262, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246262, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246262, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 11 23:57:47.227: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 11 23:57:47.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1381-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:48.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1771" for this suite.
STEP: Destroying namespace "webhook-1771-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.073 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":62,"skipped":1085,"failed":0}
S
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:48.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 11 23:57:52.696: INFO: &Pod{ObjectMeta:{send-events-e75b2278-fa43-46bc-bd09-07b3deee2083 events-521 /api/v1/namespaces/events-521/pods/send-events-e75b2278-fa43-46bc-bd09-07b3deee2083 16f2ac5c-1e3f-4034-9b39-6a37bbed88fe 7333129 0 2020-04-11 23:57:48 +0000 UTC map[name:foo time:637511816] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m4trg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m4trg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m4trg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-11 23:57:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-11 23:57:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-11 23:57:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-11 23:57:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.12,StartTime:2020-04-11 23:57:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-11 23:57:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://ee9567210780b785d6b80c4bb58f7b7c3986ca0e70c7430ad31b6c3f34c7539a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Apr 11 23:57:54.702: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 11 23:57:56.706: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:57:56.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-521" for this suite.
• [SLOW TEST:8.293 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":63,"skipped":1086,"failed":0}
S
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:57:56.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-8921/configmap-test-854329e4-5ace-41cd-9cda-6ccddba2aeaf
STEP: Creating a pod to test consume configMaps
Apr 11 23:57:56.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f" in namespace "configmap-8921" to be "Succeeded or Failed"
Apr 11 23:57:56.981: INFO: Pod "pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f": Phase="Pending", Reason="", readiness=false. Elapsed: 43.153654ms
Apr 11 23:57:58.985: INFO: Pod "pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047033354s
Apr 11 23:58:00.990: INFO: Pod "pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051423208s
STEP: Saw pod success
Apr 11 23:58:00.990: INFO: Pod "pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f" satisfied condition "Succeeded or Failed"
Apr 11 23:58:00.993: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f container env-test:
STEP: delete the pod
Apr 11 23:58:01.027: INFO: Waiting for pod pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f to disappear
Apr 11 23:58:01.032: INFO: Pod pod-configmaps-d07582f5-6a36-4865-9202-2c6285b2bc6f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:58:01.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8921" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1087,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:58:01.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 11 23:58:01.081: INFO: >>> kubeConfig: /root/.kube/config
Apr 11 23:58:03.983: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:58:14.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9091" for this suite.
• [SLOW TEST:13.518 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":65,"skipped":1105,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:58:14.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Apr 11 23:58:14.631: INFO: Waiting up to 5m0s for pod "var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9" in namespace "var-expansion-4476" to be "Succeeded or Failed"
Apr 11 23:58:14.637: INFO: Pod "var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106183ms
Apr 11 23:58:16.641: INFO: Pod "var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010335308s
Apr 11 23:58:18.646: INFO: Pod "var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01477181s
STEP: Saw pod success
Apr 11 23:58:18.646: INFO: Pod "var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9" satisfied condition "Succeeded or Failed"
Apr 11 23:58:18.649: INFO: Trying to get logs from node latest-worker pod var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9 container dapi-container:
STEP: delete the pod
Apr 11 23:58:18.714: INFO: Waiting for pod var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9 to disappear
Apr 11 23:58:18.721: INFO: Pod var-expansion-2c035ac6-8a8d-44df-9fb9-c0bd43b17cb9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:58:18.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4476" for this suite.
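The repeated `Phase="Pending" … Elapsed: …` entries above come from the framework's poll-until-terminal-phase loop: get the pod, log its phase and elapsed time, sleep roughly two seconds, repeat until the pod is "Succeeded" or "Failed" or the 5m0s budget runs out. A minimal sketch of that pattern in plain Python (hypothetical helper, not the actual Go framework code; `get_phase` stands in for a pod GET):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    Mirrors the log above: each iteration reports the current phase and
    elapsed time, then sleeps before checking again.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ('Succeeded', 'Failed'):
            return phase  # satisfied condition "Succeeded or Failed"
        if elapsed > timeout:
            raise TimeoutError(f'pod still "{phase}" after {timeout}s')
        sleep(interval)
```

`clock` and `sleep` are injectable only so the loop can be exercised without real delays; the shape of the loop is the point.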
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1109,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:58:18.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 11 23:58:26.822: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 11 23:58:26.842: INFO: Pod pod-with-prestop-http-hook still exists Apr 11 23:58:28.842: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 11 23:58:28.846: INFO: Pod pod-with-prestop-http-hook still exists Apr 11 23:58:30.842: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 11 23:58:30.846: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:58:30.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4827" for this suite. 
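The delete sequence above ("Waiting for pod … to disappear" / "still exists" / "no longer exists") is a bounded existence poll: the pod lingers while its preStop HTTP hook and graceful termination run, then vanishes. A rough Python sketch of the same loop (illustrative only; `pod_exists` is a hypothetical stand-in for a GET that eventually returns NotFound):

```python
import time

def wait_for_disappearance(pod_exists, timeout=60.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll pod_exists() until the pod is gone or the timeout expires.

    Returns True once the pod no longer exists, False on timeout;
    matches the alternating "still exists" / "no longer exists"
    entries in the log above.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if not pod_exists():
            print('Pod no longer exists')
            return True
        print('Pod still exists')
        sleep(interval)
    return False
```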
• [SLOW TEST:12.136 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1110,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:58:30.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-683abd03-5e18-4f54-a7ea-deec5c41745d
STEP: Creating a pod to test consume secrets
Apr 11 23:58:30.969: INFO: Waiting up to 5m0s for pod "pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a" in namespace "secrets-1236" to be "Succeeded or Failed"
Apr 11 23:58:30.974: INFO: Pod "pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650747ms
Apr 11 23:58:32.986: INFO: Pod "pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016785331s
Apr 11 23:58:34.990: INFO: Pod "pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021293679s
STEP: Saw pod success
Apr 11 23:58:34.990: INFO: Pod "pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a" satisfied condition "Succeeded or Failed"
Apr 11 23:58:34.993: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a container secret-volume-test:
STEP: delete the pod
Apr 11 23:58:35.034: INFO: Waiting for pod pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a to disappear
Apr 11 23:58:35.046: INFO: Pod pod-secrets-a6a51391-ee56-4898-aa1e-63b2f8bcfb9a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:58:35.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1236" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1116,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:58:35.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 11 23:58:35.683: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 11 23:58:37.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246315, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246315, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246315, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246315, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 11 23:58:40.710: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 11 23:58:40.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:58:41.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3197" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:6.896 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":69,"skipped":1120,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:58:41.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 11 23:58:43.054: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 11 23:58:45.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246323, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246323, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246323, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246323, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 11 23:58:48.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 11 23:58:48.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3281-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:58:49.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8486" for this suite.
STEP: Destroying namespace "webhook-8486-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.402 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":70,"skipped":1138,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:58:49.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 11 23:58:53.963: INFO: Successfully updated pod "labelsupdate7dd2a595-03b5-4a06-8ecf-3a1f915a0f88"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:58:55.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7718" for this suite.
• [SLOW TEST:6.633 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1167,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:58:55.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9399
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 11 23:58:56.038: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 11 23:58:56.085: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 11 23:58:58.119: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 11 23:59:00.090: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:02.090: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:04.090: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:06.089: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:08.089: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:10.092: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:12.097: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:14.090: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:16.090: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 11 23:59:18.090: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 11 23:59:18.095: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 11 23:59:20.099: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 11 23:59:24.123: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.17:8080/dial?request=hostname&protocol=http&host=10.244.2.16&port=8080&tries=1'] Namespace:pod-network-test-9399 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 11 23:59:24.123: INFO: >>> kubeConfig: /root/.kube/config
I0411 23:59:24.169990 7 log.go:172] (0xc002c9ee70) (0xc002337900) Create stream
I0411 23:59:24.170026 7 log.go:172] (0xc002c9ee70) (0xc002337900) Stream added, broadcasting: 1
I0411 23:59:24.172021 7 log.go:172] (0xc002c9ee70) Reply frame received for 1
I0411 23:59:24.172048 7 log.go:172] (0xc002c9ee70) (0xc0023379a0) Create stream
I0411 23:59:24.172056 7 log.go:172] (0xc002c9ee70) (0xc0023379a0) Stream added, broadcasting: 3
I0411 23:59:24.172821 7 log.go:172] (0xc002c9ee70) Reply frame received for 3
I0411 23:59:24.172850 7 log.go:172] (0xc002c9ee70) (0xc002337a40) Create stream
I0411 23:59:24.172861 7 log.go:172] (0xc002c9ee70) (0xc002337a40) Stream added, broadcasting: 5
I0411 23:59:24.173777 7 log.go:172] (0xc002c9ee70) Reply frame received for 5
I0411 23:59:24.261324 7 log.go:172] (0xc002c9ee70) Data frame received for 3
I0411 23:59:24.261355 7 log.go:172] (0xc0023379a0) (3) Data frame handling
I0411 23:59:24.261381 7 log.go:172] (0xc0023379a0) (3) Data frame sent
I0411 23:59:24.261636 7 log.go:172] (0xc002c9ee70) Data frame received for 5
I0411 23:59:24.261669 7 log.go:172] (0xc002337a40) (5) Data frame handling
I0411 23:59:24.261702 7 log.go:172] (0xc002c9ee70) Data frame received for 3
I0411 23:59:24.261718 7 log.go:172] (0xc0023379a0) (3) Data frame handling
I0411 23:59:24.264045 7 log.go:172] (0xc002c9ee70) Data frame received for 1
I0411 23:59:24.264070 7 log.go:172] (0xc002337900) (1) Data frame handling
I0411 23:59:24.264090 7 log.go:172] (0xc002337900) (1) Data frame sent
I0411 23:59:24.264107 7 log.go:172] (0xc002c9ee70) (0xc002337900) Stream removed, broadcasting: 1
I0411 23:59:24.264160 7 log.go:172] (0xc002c9ee70) Go away received
I0411 23:59:24.264506 7 log.go:172] (0xc002c9ee70) (0xc002337900) Stream removed, broadcasting: 1
I0411 23:59:24.264531 7 log.go:172] (0xc002c9ee70) (0xc0023379a0) Stream removed, broadcasting: 3
I0411 23:59:24.264550 7 log.go:172] (0xc002c9ee70) (0xc002337a40) Stream removed, broadcasting: 5
Apr 11 23:59:24.264: INFO: Waiting for responses: map[]
Apr 11 23:59:24.282: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.17:8080/dial?request=hostname&protocol=http&host=10.244.1.41&port=8080&tries=1'] Namespace:pod-network-test-9399 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 11 23:59:24.282: INFO: >>> kubeConfig: /root/.kube/config
I0411 23:59:24.324251 7 log.go:172] (0xc00274ec60) (0xc00290adc0) Create stream
I0411 23:59:24.324292 7 log.go:172] (0xc00274ec60) (0xc00290adc0) Stream added, broadcasting: 1
I0411 23:59:24.326121 7 log.go:172] (0xc00274ec60) Reply frame received for 1
I0411 23:59:24.326155 7 log.go:172] (0xc00274ec60) (0xc002337ae0) Create stream
I0411 23:59:24.326164 7 log.go:172] (0xc00274ec60) (0xc002337ae0) Stream added, broadcasting: 3
I0411 23:59:24.326898 7 log.go:172] (0xc00274ec60) Reply frame received for 3
I0411 23:59:24.326933 7 log.go:172] (0xc00274ec60) (0xc00223b400) Create stream
I0411 23:59:24.326946 7 log.go:172] (0xc00274ec60) (0xc00223b400) Stream added, broadcasting: 5
I0411 23:59:24.327676 7 log.go:172] (0xc00274ec60) Reply frame received for 5
I0411 23:59:24.398421 7 log.go:172] (0xc00274ec60) Data frame received for 3
I0411 23:59:24.398466 7 log.go:172] (0xc002337ae0) (3) Data frame handling
I0411 23:59:24.398520 7 log.go:172] (0xc002337ae0) (3) Data frame sent
I0411 23:59:24.398957 7 log.go:172] (0xc00274ec60) Data frame received for 5
I0411 23:59:24.398983 7 log.go:172] (0xc00223b400) (5) Data frame handling
I0411 23:59:24.399023 7 log.go:172] (0xc00274ec60) Data frame received for 3
I0411 23:59:24.399045 7 log.go:172] (0xc002337ae0) (3) Data frame handling
I0411 23:59:24.400560 7 log.go:172] (0xc00274ec60) Data frame received for 1
I0411 23:59:24.400584 7 log.go:172] (0xc00290adc0) (1) Data frame handling
I0411 23:59:24.400599 7 log.go:172] (0xc00290adc0) (1) Data frame sent
I0411 23:59:24.400620 7 log.go:172] (0xc00274ec60) (0xc00290adc0) Stream removed, broadcasting: 1
I0411 23:59:24.400644 7 log.go:172] (0xc00274ec60) Go away received
I0411 23:59:24.400718 7 log.go:172] (0xc00274ec60) (0xc00290adc0) Stream removed, broadcasting: 1
I0411 23:59:24.400741 7 log.go:172] (0xc00274ec60) (0xc002337ae0) Stream removed, broadcasting: 3
I0411 23:59:24.400750 7 log.go:172] (0xc00274ec60) (0xc00223b400) Stream removed, broadcasting: 5
Apr 11 23:59:24.400: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:59:24.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9399" for this suite.
• [SLOW TEST:28.423 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1178,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 11 23:59:24.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-0ef35fe5-16f2-4627-a106-3ee835604578
STEP: Creating a pod to test consume configMaps
Apr 11 23:59:24.524: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7" in namespace "configmap-556" to be "Succeeded or Failed"
Apr 11 23:59:24.532: INFO: Pod "pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.146645ms
Apr 11 23:59:26.536: INFO: Pod "pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01169514s
Apr 11 23:59:28.541: INFO: Pod "pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016392588s
STEP: Saw pod success
Apr 11 23:59:28.541: INFO: Pod "pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7" satisfied condition "Succeeded or Failed"
Apr 11 23:59:28.545: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7 container configmap-volume-test:
STEP: delete the pod
Apr 11 23:59:28.562: INFO: Waiting for pod pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7 to disappear
Apr 11 23:59:28.567: INFO: Pod pod-configmaps-ce59b8eb-44c8-4cd5-a4f0-3e55f3bb22d7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 11 23:59:28.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-556" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1188,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:59:28.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-3919 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3919 STEP: Deleting pre-stop pod Apr 11 23:59:43.693: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 11 23:59:43.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3919" for this suite. • [SLOW TEST:15.143 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":74,"skipped":1206,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 11 23:59:43.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9140 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 11 23:59:43.816: INFO: Found 0 stateful pods, waiting for 3 Apr 11 23:59:53.822: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 11 23:59:53.822: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 11 23:59:53.822: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 12 00:00:03.821: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:00:03.821: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:00:03.821: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 12 00:00:03.848: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 12 00:00:13.882: INFO: Updating stateful set ss2 Apr 12 00:00:13.893: INFO: Waiting for Pod statefulset-9140/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 12 00:00:24.030: INFO: Found 2 stateful pods, waiting for 3 Apr 12 00:00:34.036: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:00:34.036: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:00:34.036: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 12 00:00:34.059: INFO: Updating stateful set ss2 Apr 12 00:00:34.078: INFO: Waiting for Pod 
statefulset-9140/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 12 00:00:44.104: INFO: Updating stateful set ss2 Apr 12 00:00:44.115: INFO: Waiting for StatefulSet statefulset-9140/ss2 to complete update Apr 12 00:00:44.115: INFO: Waiting for Pod statefulset-9140/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 12 00:00:54.121: INFO: Waiting for StatefulSet statefulset-9140/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 12 00:01:04.123: INFO: Deleting all statefulset in ns statefulset-9140 Apr 12 00:01:04.126: INFO: Scaling statefulset ss2 to 0 Apr 12 00:01:34.143: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:01:34.146: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:01:34.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9140" for this suite. 
• [SLOW TEST:110.463 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":75,"skipped":1208,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:01:34.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:01:34.216: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 12 00:01:37.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5261 create -f -' 
Apr 12 00:01:37.618: INFO: stderr: "" Apr 12 00:01:37.618: INFO: stdout: "e2e-test-crd-publish-openapi-3787-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 12 00:01:37.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5261 delete e2e-test-crd-publish-openapi-3787-crds test-cr' Apr 12 00:01:37.719: INFO: stderr: "" Apr 12 00:01:37.719: INFO: stdout: "e2e-test-crd-publish-openapi-3787-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 12 00:01:37.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5261 apply -f -' Apr 12 00:01:38.005: INFO: stderr: "" Apr 12 00:01:38.005: INFO: stdout: "e2e-test-crd-publish-openapi-3787-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 12 00:01:38.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5261 delete e2e-test-crd-publish-openapi-3787-crds test-cr' Apr 12 00:01:38.113: INFO: stderr: "" Apr 12 00:01:38.113: INFO: stdout: "e2e-test-crd-publish-openapi-3787-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 12 00:01:38.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3787-crds' Apr 12 00:01:38.342: INFO: stderr: "" Apr 12 00:01:38.342: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3787-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:01:41.253: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5261" for this suite. • [SLOW TEST:7.078 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":76,"skipped":1209,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:01:41.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7476 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7476 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7476 Apr 12 00:01:41.325: INFO: Found 0 stateful pods, waiting for 1 Apr 12 00:01:51.330: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 12 00:01:51.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:01:51.569: INFO: stderr: "I0412 00:01:51.454216 366 log.go:172] (0xc00003a4d0) (0xc0009f8000) Create stream\nI0412 00:01:51.454277 366 log.go:172] (0xc00003a4d0) (0xc0009f8000) Stream added, broadcasting: 1\nI0412 00:01:51.456081 366 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0412 00:01:51.456109 366 log.go:172] (0xc00003a4d0) (0xc0009f80a0) Create stream\nI0412 00:01:51.456124 366 log.go:172] (0xc00003a4d0) (0xc0009f80a0) Stream added, broadcasting: 3\nI0412 00:01:51.456948 366 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0412 00:01:51.456994 366 log.go:172] (0xc00003a4d0) (0xc0009f8280) Create stream\nI0412 00:01:51.457009 366 log.go:172] (0xc00003a4d0) (0xc0009f8280) Stream added, broadcasting: 5\nI0412 00:01:51.457805 366 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0412 00:01:51.540203 366 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0412 00:01:51.540229 366 log.go:172] (0xc0009f8280) (5) Data frame handling\nI0412 00:01:51.540252 366 log.go:172] (0xc0009f8280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:01:51.560793 366 log.go:172] (0xc00003a4d0) Data frame received for 
3\nI0412 00:01:51.560825 366 log.go:172] (0xc0009f80a0) (3) Data frame handling\nI0412 00:01:51.560864 366 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0412 00:01:51.560894 366 log.go:172] (0xc0009f8280) (5) Data frame handling\nI0412 00:01:51.560928 366 log.go:172] (0xc0009f80a0) (3) Data frame sent\nI0412 00:01:51.560953 366 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0412 00:01:51.560961 366 log.go:172] (0xc0009f80a0) (3) Data frame handling\nI0412 00:01:51.562922 366 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0412 00:01:51.562957 366 log.go:172] (0xc0009f8000) (1) Data frame handling\nI0412 00:01:51.562986 366 log.go:172] (0xc0009f8000) (1) Data frame sent\nI0412 00:01:51.563016 366 log.go:172] (0xc00003a4d0) (0xc0009f8000) Stream removed, broadcasting: 1\nI0412 00:01:51.563704 366 log.go:172] (0xc00003a4d0) Go away received\nI0412 00:01:51.564231 366 log.go:172] (0xc00003a4d0) (0xc0009f8000) Stream removed, broadcasting: 1\nI0412 00:01:51.564261 366 log.go:172] (0xc00003a4d0) (0xc0009f80a0) Stream removed, broadcasting: 3\nI0412 00:01:51.564296 366 log.go:172] (0xc00003a4d0) (0xc0009f8280) Stream removed, broadcasting: 5\n" Apr 12 00:01:51.569: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:01:51.569: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:01:51.573: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 12 00:02:01.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:02:01.577: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:02:01.590: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999399s Apr 12 00:02:02.595: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996323023s Apr 12 00:02:03.598: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 7.992027414s Apr 12 00:02:04.603: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98819641s Apr 12 00:02:05.607: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983701658s Apr 12 00:02:06.612: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979380508s Apr 12 00:02:07.616: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974864293s Apr 12 00:02:08.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.970356724s Apr 12 00:02:09.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96552745s Apr 12 00:02:10.650: INFO: Verifying statefulset ss doesn't scale past 1 for another 960.979916ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7476 Apr 12 00:02:11.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:02:11.888: INFO: stderr: "I0412 00:02:11.784757 387 log.go:172] (0xc0009f20b0) (0xc0004a0b40) Create stream\nI0412 00:02:11.784805 387 log.go:172] (0xc0009f20b0) (0xc0004a0b40) Stream added, broadcasting: 1\nI0412 00:02:11.787479 387 log.go:172] (0xc0009f20b0) Reply frame received for 1\nI0412 00:02:11.787526 387 log.go:172] (0xc0009f20b0) (0xc0009a6000) Create stream\nI0412 00:02:11.787539 387 log.go:172] (0xc0009f20b0) (0xc0009a6000) Stream added, broadcasting: 3\nI0412 00:02:11.788520 387 log.go:172] (0xc0009f20b0) Reply frame received for 3\nI0412 00:02:11.788546 387 log.go:172] (0xc0009f20b0) (0xc0009a60a0) Create stream\nI0412 00:02:11.788555 387 log.go:172] (0xc0009f20b0) (0xc0009a60a0) Stream added, broadcasting: 5\nI0412 00:02:11.789795 387 log.go:172] (0xc0009f20b0) Reply frame received for 5\nI0412 00:02:11.881982 387 log.go:172] (0xc0009f20b0) Data frame received for 
3\nI0412 00:02:11.882033 387 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0412 00:02:11.882060 387 log.go:172] (0xc0009a6000) (3) Data frame sent\nI0412 00:02:11.882080 387 log.go:172] (0xc0009f20b0) Data frame received for 3\nI0412 00:02:11.882096 387 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0412 00:02:11.882132 387 log.go:172] (0xc0009f20b0) Data frame received for 5\nI0412 00:02:11.882169 387 log.go:172] (0xc0009a60a0) (5) Data frame handling\nI0412 00:02:11.882193 387 log.go:172] (0xc0009a60a0) (5) Data frame sent\nI0412 00:02:11.882205 387 log.go:172] (0xc0009f20b0) Data frame received for 5\nI0412 00:02:11.882218 387 log.go:172] (0xc0009a60a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0412 00:02:11.883753 387 log.go:172] (0xc0009f20b0) Data frame received for 1\nI0412 00:02:11.883776 387 log.go:172] (0xc0004a0b40) (1) Data frame handling\nI0412 00:02:11.883799 387 log.go:172] (0xc0004a0b40) (1) Data frame sent\nI0412 00:02:11.883959 387 log.go:172] (0xc0009f20b0) (0xc0004a0b40) Stream removed, broadcasting: 1\nI0412 00:02:11.883994 387 log.go:172] (0xc0009f20b0) Go away received\nI0412 00:02:11.884517 387 log.go:172] (0xc0009f20b0) (0xc0004a0b40) Stream removed, broadcasting: 1\nI0412 00:02:11.884536 387 log.go:172] (0xc0009f20b0) (0xc0009a6000) Stream removed, broadcasting: 3\nI0412 00:02:11.884545 387 log.go:172] (0xc0009f20b0) (0xc0009a60a0) Stream removed, broadcasting: 5\n" Apr 12 00:02:11.889: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:02:11.889: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:02:11.892: INFO: Found 1 stateful pods, waiting for 3 Apr 12 00:02:21.897: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:02:21.898: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - 
Ready=true Apr 12 00:02:21.898: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 12 00:02:21.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:02:22.133: INFO: stderr: "I0412 00:02:22.034196 406 log.go:172] (0xc00003a6e0) (0xc0006690e0) Create stream\nI0412 00:02:22.034263 406 log.go:172] (0xc00003a6e0) (0xc0006690e0) Stream added, broadcasting: 1\nI0412 00:02:22.036547 406 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0412 00:02:22.036605 406 log.go:172] (0xc00003a6e0) (0xc0009cc000) Create stream\nI0412 00:02:22.036628 406 log.go:172] (0xc00003a6e0) (0xc0009cc000) Stream added, broadcasting: 3\nI0412 00:02:22.037851 406 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0412 00:02:22.037890 406 log.go:172] (0xc00003a6e0) (0xc000b46000) Create stream\nI0412 00:02:22.037903 406 log.go:172] (0xc00003a6e0) (0xc000b46000) Stream added, broadcasting: 5\nI0412 00:02:22.038975 406 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0412 00:02:22.127378 406 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0412 00:02:22.127423 406 log.go:172] (0xc0009cc000) (3) Data frame handling\nI0412 00:02:22.127438 406 log.go:172] (0xc0009cc000) (3) Data frame sent\nI0412 00:02:22.127447 406 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0412 00:02:22.127458 406 log.go:172] (0xc0009cc000) (3) Data frame handling\nI0412 00:02:22.127499 406 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0412 00:02:22.127523 406 log.go:172] (0xc000b46000) (5) Data frame handling\nI0412 00:02:22.127541 406 log.go:172] (0xc000b46000) (5) Data frame sent\nI0412 00:02:22.127555 406 log.go:172] (0xc00003a6e0) Data frame received 
for 5\nI0412 00:02:22.127565 406 log.go:172] (0xc000b46000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:02:22.128904 406 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0412 00:02:22.128923 406 log.go:172] (0xc0006690e0) (1) Data frame handling\nI0412 00:02:22.128939 406 log.go:172] (0xc0006690e0) (1) Data frame sent\nI0412 00:02:22.128947 406 log.go:172] (0xc00003a6e0) (0xc0006690e0) Stream removed, broadcasting: 1\nI0412 00:02:22.128956 406 log.go:172] (0xc00003a6e0) Go away received\nI0412 00:02:22.129537 406 log.go:172] (0xc00003a6e0) (0xc0006690e0) Stream removed, broadcasting: 1\nI0412 00:02:22.129573 406 log.go:172] (0xc00003a6e0) (0xc0009cc000) Stream removed, broadcasting: 3\nI0412 00:02:22.129596 406 log.go:172] (0xc00003a6e0) (0xc000b46000) Stream removed, broadcasting: 5\n" Apr 12 00:02:22.134: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:02:22.134: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:02:22.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:02:22.382: INFO: stderr: "I0412 00:02:22.262945 427 log.go:172] (0xc000bde000) (0xc00044cb40) Create stream\nI0412 00:02:22.262993 427 log.go:172] (0xc000bde000) (0xc00044cb40) Stream added, broadcasting: 1\nI0412 00:02:22.264661 427 log.go:172] (0xc000bde000) Reply frame received for 1\nI0412 00:02:22.264695 427 log.go:172] (0xc000bde000) (0xc000b4e0a0) Create stream\nI0412 00:02:22.264704 427 log.go:172] (0xc000bde000) (0xc000b4e0a0) Stream added, broadcasting: 3\nI0412 00:02:22.265478 427 log.go:172] (0xc000bde000) Reply frame received for 3\nI0412 00:02:22.265519 427 log.go:172] (0xc000bde000) (0xc00044cbe0) Create stream\nI0412 
00:02:22.265529 427 log.go:172] (0xc000bde000) (0xc00044cbe0) Stream added, broadcasting: 5\nI0412 00:02:22.266184 427 log.go:172] (0xc000bde000) Reply frame received for 5\nI0412 00:02:22.337565 427 log.go:172] (0xc000bde000) Data frame received for 5\nI0412 00:02:22.337605 427 log.go:172] (0xc00044cbe0) (5) Data frame handling\nI0412 00:02:22.337628 427 log.go:172] (0xc00044cbe0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:02:22.375368 427 log.go:172] (0xc000bde000) Data frame received for 3\nI0412 00:02:22.375398 427 log.go:172] (0xc000b4e0a0) (3) Data frame handling\nI0412 00:02:22.375417 427 log.go:172] (0xc000b4e0a0) (3) Data frame sent\nI0412 00:02:22.375429 427 log.go:172] (0xc000bde000) Data frame received for 3\nI0412 00:02:22.375435 427 log.go:172] (0xc000b4e0a0) (3) Data frame handling\nI0412 00:02:22.375502 427 log.go:172] (0xc000bde000) Data frame received for 5\nI0412 00:02:22.375515 427 log.go:172] (0xc00044cbe0) (5) Data frame handling\nI0412 00:02:22.378230 427 log.go:172] (0xc000bde000) Data frame received for 1\nI0412 00:02:22.378257 427 log.go:172] (0xc00044cb40) (1) Data frame handling\nI0412 00:02:22.378287 427 log.go:172] (0xc00044cb40) (1) Data frame sent\nI0412 00:02:22.378305 427 log.go:172] (0xc000bde000) (0xc00044cb40) Stream removed, broadcasting: 1\nI0412 00:02:22.378328 427 log.go:172] (0xc000bde000) Go away received\nI0412 00:02:22.378594 427 log.go:172] (0xc000bde000) (0xc00044cb40) Stream removed, broadcasting: 1\nI0412 00:02:22.378618 427 log.go:172] (0xc000bde000) (0xc000b4e0a0) Stream removed, broadcasting: 3\nI0412 00:02:22.378628 427 log.go:172] (0xc000bde000) (0xc00044cbe0) Stream removed, broadcasting: 5\n" Apr 12 00:02:22.382: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:02:22.382: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:02:22.382: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:02:22.643: INFO: stderr: "I0412 00:02:22.541682 448 log.go:172] (0xc0000eab00) (0xc0009ec0a0) Create stream\nI0412 00:02:22.541753 448 log.go:172] (0xc0000eab00) (0xc0009ec0a0) Stream added, broadcasting: 1\nI0412 00:02:22.544809 448 log.go:172] (0xc0000eab00) Reply frame received for 1\nI0412 00:02:22.544869 448 log.go:172] (0xc0000eab00) (0xc000827220) Create stream\nI0412 00:02:22.544886 448 log.go:172] (0xc0000eab00) (0xc000827220) Stream added, broadcasting: 3\nI0412 00:02:22.546145 448 log.go:172] (0xc0000eab00) Reply frame received for 3\nI0412 00:02:22.546194 448 log.go:172] (0xc0000eab00) (0xc0009ec140) Create stream\nI0412 00:02:22.546208 448 log.go:172] (0xc0000eab00) (0xc0009ec140) Stream added, broadcasting: 5\nI0412 00:02:22.547388 448 log.go:172] (0xc0000eab00) Reply frame received for 5\nI0412 00:02:22.605773 448 log.go:172] (0xc0000eab00) Data frame received for 5\nI0412 00:02:22.605803 448 log.go:172] (0xc0009ec140) (5) Data frame handling\nI0412 00:02:22.605824 448 log.go:172] (0xc0009ec140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:02:22.637743 448 log.go:172] (0xc0000eab00) Data frame received for 3\nI0412 00:02:22.637762 448 log.go:172] (0xc000827220) (3) Data frame handling\nI0412 00:02:22.637772 448 log.go:172] (0xc000827220) (3) Data frame sent\nI0412 00:02:22.637779 448 log.go:172] (0xc0000eab00) Data frame received for 3\nI0412 00:02:22.637808 448 log.go:172] (0xc000827220) (3) Data frame handling\nI0412 00:02:22.638019 448 log.go:172] (0xc0000eab00) Data frame received for 5\nI0412 00:02:22.638048 448 log.go:172] (0xc0009ec140) (5) Data frame handling\nI0412 00:02:22.640022 448 log.go:172] (0xc0000eab00) Data frame received for 1\nI0412 00:02:22.640038 448 log.go:172] 
(0xc0009ec0a0) (1) Data frame handling\nI0412 00:02:22.640068 448 log.go:172] (0xc0009ec0a0) (1) Data frame sent\nI0412 00:02:22.640084 448 log.go:172] (0xc0000eab00) (0xc0009ec0a0) Stream removed, broadcasting: 1\nI0412 00:02:22.640132 448 log.go:172] (0xc0000eab00) Go away received\nI0412 00:02:22.640346 448 log.go:172] (0xc0000eab00) (0xc0009ec0a0) Stream removed, broadcasting: 1\nI0412 00:02:22.640359 448 log.go:172] (0xc0000eab00) (0xc000827220) Stream removed, broadcasting: 3\nI0412 00:02:22.640364 448 log.go:172] (0xc0000eab00) (0xc0009ec140) Stream removed, broadcasting: 5\n" Apr 12 00:02:22.643: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:02:22.643: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:02:22.643: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:02:22.647: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 12 00:02:32.655: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:02:32.655: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:02:32.655: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:02:32.668: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999241s Apr 12 00:02:33.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.9941252s Apr 12 00:02:34.679: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988751147s Apr 12 00:02:35.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983158817s Apr 12 00:02:36.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978261378s Apr 12 00:02:37.694: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973359756s Apr 12 00:02:38.699: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 3.968013595s Apr 12 00:02:39.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.9631758s Apr 12 00:02:40.710: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957503621s Apr 12 00:02:41.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.846312ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7476 Apr 12 00:02:42.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:02:42.975: INFO: stderr: "I0412 00:02:42.876959 471 log.go:172] (0xc0009e8630) (0xc000544820) Create stream\nI0412 00:02:42.877024 471 log.go:172] (0xc0009e8630) (0xc000544820) Stream added, broadcasting: 1\nI0412 00:02:42.880033 471 log.go:172] (0xc0009e8630) Reply frame received for 1\nI0412 00:02:42.880082 471 log.go:172] (0xc0009e8630) (0xc0006512c0) Create stream\nI0412 00:02:42.880094 471 log.go:172] (0xc0009e8630) (0xc0006512c0) Stream added, broadcasting: 3\nI0412 00:02:42.881067 471 log.go:172] (0xc0009e8630) Reply frame received for 3\nI0412 00:02:42.881108 471 log.go:172] (0xc0009e8630) (0xc000976000) Create stream\nI0412 00:02:42.881242 471 log.go:172] (0xc0009e8630) (0xc000976000) Stream added, broadcasting: 5\nI0412 00:02:42.882159 471 log.go:172] (0xc0009e8630) Reply frame received for 5\nI0412 00:02:42.969530 471 log.go:172] (0xc0009e8630) Data frame received for 3\nI0412 00:02:42.969579 471 log.go:172] (0xc0006512c0) (3) Data frame handling\nI0412 00:02:42.969589 471 log.go:172] (0xc0006512c0) (3) Data frame sent\nI0412 00:02:42.969594 471 log.go:172] (0xc0009e8630) Data frame received for 3\nI0412 00:02:42.969598 471 log.go:172] (0xc0006512c0) (3) Data frame handling\nI0412 00:02:42.969613 471 log.go:172] (0xc0009e8630) Data frame received 
for 5\nI0412 00:02:42.969628 471 log.go:172] (0xc000976000) (5) Data frame handling\nI0412 00:02:42.969640 471 log.go:172] (0xc000976000) (5) Data frame sent\nI0412 00:02:42.969646 471 log.go:172] (0xc0009e8630) Data frame received for 5\nI0412 00:02:42.969651 471 log.go:172] (0xc000976000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0412 00:02:42.971799 471 log.go:172] (0xc0009e8630) Data frame received for 1\nI0412 00:02:42.971818 471 log.go:172] (0xc000544820) (1) Data frame handling\nI0412 00:02:42.971838 471 log.go:172] (0xc000544820) (1) Data frame sent\nI0412 00:02:42.971855 471 log.go:172] (0xc0009e8630) (0xc000544820) Stream removed, broadcasting: 1\nI0412 00:02:42.971878 471 log.go:172] (0xc0009e8630) Go away received\nI0412 00:02:42.972234 471 log.go:172] (0xc0009e8630) (0xc000544820) Stream removed, broadcasting: 1\nI0412 00:02:42.972248 471 log.go:172] (0xc0009e8630) (0xc0006512c0) Stream removed, broadcasting: 3\nI0412 00:02:42.972254 471 log.go:172] (0xc0009e8630) (0xc000976000) Stream removed, broadcasting: 5\n" Apr 12 00:02:42.975: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:02:42.975: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:02:42.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:02:43.173: INFO: stderr: "I0412 00:02:43.096443 493 log.go:172] (0xc000ba9290) (0xc000b9a500) Create stream\nI0412 00:02:43.096500 493 log.go:172] (0xc000ba9290) (0xc000b9a500) Stream added, broadcasting: 1\nI0412 00:02:43.099503 493 log.go:172] (0xc000ba9290) Reply frame received for 1\nI0412 00:02:43.099538 493 log.go:172] (0xc000ba9290) (0xc000641680) Create stream\nI0412 00:02:43.099549 493 
log.go:172] (0xc000ba9290) (0xc000641680) Stream added, broadcasting: 3\nI0412 00:02:43.100218 493 log.go:172] (0xc000ba9290) Reply frame received for 3\nI0412 00:02:43.100253 493 log.go:172] (0xc000ba9290) (0xc000476aa0) Create stream\nI0412 00:02:43.100266 493 log.go:172] (0xc000ba9290) (0xc000476aa0) Stream added, broadcasting: 5\nI0412 00:02:43.100837 493 log.go:172] (0xc000ba9290) Reply frame received for 5\nI0412 00:02:43.164689 493 log.go:172] (0xc000ba9290) Data frame received for 5\nI0412 00:02:43.164734 493 log.go:172] (0xc000476aa0) (5) Data frame handling\nI0412 00:02:43.164752 493 log.go:172] (0xc000476aa0) (5) Data frame sent\nI0412 00:02:43.164780 493 log.go:172] (0xc000ba9290) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0412 00:02:43.164806 493 log.go:172] (0xc000476aa0) (5) Data frame handling\nI0412 00:02:43.166781 493 log.go:172] (0xc000ba9290) Data frame received for 3\nI0412 00:02:43.166805 493 log.go:172] (0xc000641680) (3) Data frame handling\nI0412 00:02:43.166818 493 log.go:172] (0xc000641680) (3) Data frame sent\nI0412 00:02:43.166828 493 log.go:172] (0xc000ba9290) Data frame received for 3\nI0412 00:02:43.166846 493 log.go:172] (0xc000641680) (3) Data frame handling\nI0412 00:02:43.168182 493 log.go:172] (0xc000ba9290) Data frame received for 1\nI0412 00:02:43.168202 493 log.go:172] (0xc000b9a500) (1) Data frame handling\nI0412 00:02:43.168215 493 log.go:172] (0xc000b9a500) (1) Data frame sent\nI0412 00:02:43.168231 493 log.go:172] (0xc000ba9290) (0xc000b9a500) Stream removed, broadcasting: 1\nI0412 00:02:43.168245 493 log.go:172] (0xc000ba9290) Go away received\nI0412 00:02:43.168501 493 log.go:172] (0xc000ba9290) (0xc000b9a500) Stream removed, broadcasting: 1\nI0412 00:02:43.168516 493 log.go:172] (0xc000ba9290) (0xc000641680) Stream removed, broadcasting: 3\nI0412 00:02:43.168522 493 log.go:172] (0xc000ba9290) (0xc000476aa0) Stream removed, broadcasting: 5\n" Apr 12 00:02:43.173: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:02:43.173: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:02:43.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7476 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:02:43.416: INFO: stderr: "I0412 00:02:43.329808 514 log.go:172] (0xc0009de000) (0xc0009a2000) Create stream\nI0412 00:02:43.329868 514 log.go:172] (0xc0009de000) (0xc0009a2000) Stream added, broadcasting: 1\nI0412 00:02:43.332648 514 log.go:172] (0xc0009de000) Reply frame received for 1\nI0412 00:02:43.332725 514 log.go:172] (0xc0009de000) (0xc000a7e000) Create stream\nI0412 00:02:43.332761 514 log.go:172] (0xc0009de000) (0xc000a7e000) Stream added, broadcasting: 3\nI0412 00:02:43.334387 514 log.go:172] (0xc0009de000) Reply frame received for 3\nI0412 00:02:43.334430 514 log.go:172] (0xc0009de000) (0xc000a7e0a0) Create stream\nI0412 00:02:43.334443 514 log.go:172] (0xc0009de000) (0xc000a7e0a0) Stream added, broadcasting: 5\nI0412 00:02:43.335661 514 log.go:172] (0xc0009de000) Reply frame received for 5\nI0412 00:02:43.410097 514 log.go:172] (0xc0009de000) Data frame received for 3\nI0412 00:02:43.410123 514 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0412 00:02:43.410131 514 log.go:172] (0xc000a7e000) (3) Data frame sent\nI0412 00:02:43.410171 514 log.go:172] (0xc0009de000) Data frame received for 5\nI0412 00:02:43.410224 514 log.go:172] (0xc000a7e0a0) (5) Data frame handling\nI0412 00:02:43.410249 514 log.go:172] (0xc000a7e0a0) (5) Data frame sent\nI0412 00:02:43.410282 514 log.go:172] (0xc0009de000) Data frame received for 5\nI0412 00:02:43.410292 514 log.go:172] (0xc000a7e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0412 00:02:43.410308 514 
log.go:172] (0xc0009de000) Data frame received for 3\nI0412 00:02:43.410320 514 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0412 00:02:43.411613 514 log.go:172] (0xc0009de000) Data frame received for 1\nI0412 00:02:43.411645 514 log.go:172] (0xc0009a2000) (1) Data frame handling\nI0412 00:02:43.411677 514 log.go:172] (0xc0009a2000) (1) Data frame sent\nI0412 00:02:43.411739 514 log.go:172] (0xc0009de000) (0xc0009a2000) Stream removed, broadcasting: 1\nI0412 00:02:43.411970 514 log.go:172] (0xc0009de000) Go away received\nI0412 00:02:43.412067 514 log.go:172] (0xc0009de000) (0xc0009a2000) Stream removed, broadcasting: 1\nI0412 00:02:43.412108 514 log.go:172] (0xc0009de000) (0xc000a7e000) Stream removed, broadcasting: 3\nI0412 00:02:43.412127 514 log.go:172] (0xc0009de000) (0xc000a7e0a0) Stream removed, broadcasting: 5\n" Apr 12 00:02:43.416: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:02:43.416: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:02:43.416: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 12 00:03:13.431: INFO: Deleting all statefulset in ns statefulset-7476 Apr 12 00:03:13.435: INFO: Scaling statefulset ss to 0 Apr 12 00:03:13.444: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:03:13.447: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:03:13.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7476" for this suite. 
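Buried in the SPDY stream-debug stderr above, the suite runs the same kubectl exec against each stateful pod in turn. A minimal sketch of that pattern (server address, namespace, and pod names copied from this run's log; the DRY_RUN guard is an addition of this sketch so the commands can be inspected without a live cluster):

```shell
# Re-create the "mv index.html back into the docroot" exec step the test
# runs against each stateful pod. "|| true" keeps the step from failing
# when the file has already been moved on a previous attempt.
DRY_RUN="${DRY_RUN:-1}"
count=0
for pod in ss-0 ss-1 ss-2; do
  cmd="kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config \
exec --namespace=statefulset-7476 ${pod} -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'"
  if [ "${DRY_RUN}" = "1" ]; then
    echo "${cmd}"   # print instead of executing (no cluster needed)
  else
    eval "${cmd}"
  fi
  count=$((count + 1))
done
```

Restoring index.html readies each pod's readiness probe again, which is why the subsequent scale-down to 0 can proceed and be verified in reverse ordinal order.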
• [SLOW TEST:92.229 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":77,"skipped":1216,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:03:13.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 12 00:03:13.546: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:03:27.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8822" for this suite. • [SLOW TEST:14.363 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":78,"skipped":1217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:03:27.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:03:28.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8451" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":79,"skipped":1268,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:03:28.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:03:28.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8" in namespace "projected-9105" to be "Succeeded or Failed" Apr 12 00:03:28.129: INFO: Pod "downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.349787ms Apr 12 00:03:30.155: INFO: Pod "downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034732253s Apr 12 00:03:32.159: INFO: Pod "downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038619891s STEP: Saw pod success Apr 12 00:03:32.159: INFO: Pod "downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8" satisfied condition "Succeeded or Failed" Apr 12 00:03:32.162: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8 container client-container: STEP: delete the pod Apr 12 00:03:32.204: INFO: Waiting for pod downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8 to disappear Apr 12 00:03:32.213: INFO: Pod downwardapi-volume-df4a2421-0129-4cdb-bac9-afd11d36cbe8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:03:32.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9105" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1268,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:03:32.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an 
terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:03:36.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2897" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1284,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:03:36.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-kgzn STEP: Creating a pod to test atomic-volume-subpath Apr 12 00:03:36.397: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kgzn" in namespace "subpath-4262" to be "Succeeded or Failed" Apr 12 00:03:36.405: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.059721ms Apr 12 00:03:38.424: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026550865s Apr 12 00:03:40.428: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 4.031060635s Apr 12 00:03:42.432: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 6.035222197s Apr 12 00:03:44.436: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 8.038945772s Apr 12 00:03:46.440: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 10.043152316s Apr 12 00:03:48.444: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 12.046568666s Apr 12 00:03:50.448: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 14.050975096s Apr 12 00:03:52.452: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 16.054887548s Apr 12 00:03:54.456: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 18.058799318s Apr 12 00:03:56.460: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 20.062576197s Apr 12 00:03:58.463: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Running", Reason="", readiness=true. Elapsed: 22.065959605s Apr 12 00:04:00.473: INFO: Pod "pod-subpath-test-configmap-kgzn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.075436634s STEP: Saw pod success Apr 12 00:04:00.473: INFO: Pod "pod-subpath-test-configmap-kgzn" satisfied condition "Succeeded or Failed" Apr 12 00:04:00.475: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-kgzn container test-container-subpath-configmap-kgzn: STEP: delete the pod Apr 12 00:04:00.516: INFO: Waiting for pod pod-subpath-test-configmap-kgzn to disappear Apr 12 00:04:00.544: INFO: Pod pod-subpath-test-configmap-kgzn no longer exists STEP: Deleting pod pod-subpath-test-configmap-kgzn Apr 12 00:04:00.544: INFO: Deleting pod "pod-subpath-test-configmap-kgzn" in namespace "subpath-4262" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:04:00.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4262" for this suite. • [SLOW TEST:24.283 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":82,"skipped":1298,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:04:00.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:04:00.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57" in namespace "projected-6674" to be "Succeeded or Failed" Apr 12 00:04:00.669: INFO: Pod "downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57": Phase="Pending", Reason="", readiness=false. Elapsed: 3.261583ms Apr 12 00:04:02.687: INFO: Pod "downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021510685s Apr 12 00:04:04.691: INFO: Pod "downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025762783s STEP: Saw pod success Apr 12 00:04:04.691: INFO: Pod "downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57" satisfied condition "Succeeded or Failed" Apr 12 00:04:04.694: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57 container client-container: STEP: delete the pod Apr 12 00:04:04.743: INFO: Waiting for pod downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57 to disappear Apr 12 00:04:04.747: INFO: Pod downwardapi-volume-e5d63c2a-6c5d-4013-88df-233bb0666b57 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:04:04.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6674" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1306,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:04:04.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 12 00:04:09.361: INFO: Successfully updated pod "annotationupdatea7f9e506-62a2-498d-a00c-ab5e7d4d77d7" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:04:11.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1417" for this suite. • [SLOW TEST:6.636 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1316,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:04:11.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 12 00:04:11.451: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready 
Apr 12 00:04:11.462: INFO: Waiting for terminating namespaces to be deleted... Apr 12 00:04:11.464: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 12 00:04:11.469: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:04:11.469: INFO: Container kindnet-cni ready: true, restart count 0 Apr 12 00:04:11.469: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:04:11.469: INFO: Container kube-proxy ready: true, restart count 0 Apr 12 00:04:11.469: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 12 00:04:11.474: INFO: annotationupdatea7f9e506-62a2-498d-a00c-ab5e7d4d77d7 from downward-api-1417 started at 2020-04-12 00:04:04 +0000 UTC (1 container status recorded) Apr 12 00:04:11.474: INFO: Container client-container ready: true, restart count 0 Apr 12 00:04:11.474: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:04:11.474: INFO: Container kindnet-cni ready: true, restart count 0 Apr 12 00:04:11.474: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:04:11.474: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-693515f2-d214-4d0d-bdba-81106bb5469b 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-693515f2-d214-4d0d-bdba-81106bb5469b off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-693515f2-d214-4d0d-bdba-81106bb5469b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:04:27.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7845" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.291 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":85,"skipped":1337,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError 
is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:04:27.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 12 00:04:31.762: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:04:31.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2536" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1352,"failed":0}
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:04:31.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-cb1256fe-3fc0-453e-b944-60ba2df53ab8
STEP: Creating a pod to test consume configMaps
Apr 12 00:04:31.910: INFO: Waiting up to 5m0s for pod "pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39" in namespace "configmap-1421" to be "Succeeded or Failed"
Apr 12 00:04:31.922: INFO: Pod "pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39": Phase="Pending", Reason="", readiness=false. Elapsed: 11.554622ms
Apr 12 00:04:33.929: INFO: Pod "pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018659492s
Apr 12 00:04:35.969: INFO: Pod "pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059082116s
Apr 12 00:04:37.973: INFO: Pod "pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063125424s
STEP: Saw pod success
Apr 12 00:04:37.973: INFO: Pod "pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39" satisfied condition "Succeeded or Failed"
Apr 12 00:04:37.976: INFO: Trying to get logs from node latest-worker pod pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39 container configmap-volume-test:
STEP: delete the pod
Apr 12 00:04:38.032: INFO: Waiting for pod pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39 to disappear
Apr 12 00:04:38.042: INFO: Pod pod-configmaps-844077f6-ad62-4a61-81a0-c032035eba39 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:04:38.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1421" for this suite.
• [SLOW TEST:6.265 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1352,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:04:38.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-50dd7230-d396-4576-8364-ba46e853345a
STEP: Creating a pod to test consume secrets
Apr 12 00:04:38.159: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62" in namespace "projected-167" to be "Succeeded or Failed"
Apr 12 00:04:38.162: INFO: Pod "pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488761ms
Apr 12 00:04:40.166: INFO: Pod "pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007698863s
Apr 12 00:04:42.171: INFO: Pod "pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012139961s
STEP: Saw pod success
Apr 12 00:04:42.171: INFO: Pod "pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62" satisfied condition "Succeeded or Failed"
Apr 12 00:04:42.174: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62 container projected-secret-volume-test:
STEP: delete the pod
Apr 12 00:04:42.205: INFO: Waiting for pod pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62 to disappear
Apr 12 00:04:42.216: INFO: Pod pod-projected-secrets-9ecdb39a-ae5e-48db-859e-365df3253b62 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:04:42.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-167" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1357,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:04:42.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-88787d90-207e-424f-96fb-986f0a696a80
STEP: Creating secret with name s-test-opt-upd-e8953372-9efe-4ee4-8234-d353be0bbb0b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-88787d90-207e-424f-96fb-986f0a696a80
STEP: Updating secret s-test-opt-upd-e8953372-9efe-4ee4-8234-d353be0bbb0b
STEP: Creating secret with name s-test-opt-create-a659738b-baae-4b7f-8c44-368bc60cea6e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:06:02.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3323" for this suite.
• [SLOW TEST:80.570 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1362,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:06:02.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:06:02.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2447'
Apr 12 00:06:03.173: INFO: stderr: ""
Apr 12 00:06:03.173: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Apr 12 00:06:03.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2447'
Apr 12 00:06:03.443: INFO: stderr: "" Apr 12 00:06:03.443: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 12 00:06:04.448: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:06:04.448: INFO: Found 0 / 1 Apr 12 00:06:05.448: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:06:05.448: INFO: Found 0 / 1 Apr 12 00:06:06.492: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:06:06.493: INFO: Found 1 / 1 Apr 12 00:06:06.493: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 12 00:06:06.500: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:06:06.500: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 12 00:06:06.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-txbsh --namespace=kubectl-2447' Apr 12 00:06:06.628: INFO: stderr: "" Apr 12 00:06:06.628: INFO: stdout: "Name: agnhost-master-txbsh\nNamespace: kubectl-2447\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Sun, 12 Apr 2020 00:06:03 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.56\nIPs:\n IP: 10.244.1.56\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://9838382d524c417aa5df7574ddbe89697a727a7bb571c7d655df4e75abed6edb\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 12 Apr 2020 00:06:05 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-s66mp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
default-token-s66mp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-s66mp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-2447/agnhost-master-txbsh to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" Apr 12 00:06:06.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2447' Apr 12 00:06:06.742: INFO: stderr: "" Apr 12 00:06:06.742: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2447\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-txbsh\n" Apr 12 00:06:06.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2447' Apr 12 00:06:06.844: INFO: stderr: "" Apr 12 00:06:06.844: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2447\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: 
app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.105.141\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.56:6379\nSession Affinity: None\nEvents: \n" Apr 12 00:06:06.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 12 00:06:06.978: INFO: stderr: "" Apr 12 00:06:06.978: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sun, 12 Apr 2020 00:06:00 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 12 Apr 2020 00:05:16 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 12 Apr 2020 00:05:16 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 12 Apr 2020 00:05:16 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 12 Apr 2020 00:05:16 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n 
ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 27d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 27d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 27d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 27d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 12 00:06:06.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-2447' Apr 12 00:06:07.105: INFO: stderr: "" Apr 12 00:06:07.105: INFO: stdout: "Name: 
kubectl-2447\nLabels: e2e-framework=kubectl\n e2e-run=4138d95c-a78c-41f0-8ba2-2b0ef16101f3\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:06:07.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2447" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":90,"skipped":1406,"failed":0} ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:06:07.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-31 STEP: creating replication controller nodeport-test in namespace services-31 I0412 00:06:07.272738 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-31, replica count: 2 I0412 00:06:10.323300 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0412 00:06:13.323591 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 12 00:06:13.323: INFO: Creating new exec pod Apr 12 00:06:18.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-31 execpodvmzds -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 12 00:06:18.589: INFO: stderr: "I0412 00:06:18.481795 682 log.go:172] (0xc0009a8d10) (0xc00092a500) Create stream\nI0412 00:06:18.481844 682 log.go:172] (0xc0009a8d10) (0xc00092a500) Stream added, broadcasting: 1\nI0412 00:06:18.484399 682 log.go:172] (0xc0009a8d10) Reply frame received for 1\nI0412 00:06:18.484443 682 log.go:172] (0xc0009a8d10) (0xc0008d2000) Create stream\nI0412 00:06:18.484455 682 log.go:172] (0xc0009a8d10) (0xc0008d2000) Stream added, broadcasting: 3\nI0412 00:06:18.485693 682 log.go:172] (0xc0009a8d10) Reply frame received for 3\nI0412 00:06:18.485723 682 log.go:172] (0xc0009a8d10) (0xc00098a140) Create stream\nI0412 00:06:18.485735 682 log.go:172] (0xc0009a8d10) (0xc00098a140) Stream added, broadcasting: 5\nI0412 00:06:18.486818 682 log.go:172] (0xc0009a8d10) Reply frame received for 5\nI0412 00:06:18.583198 682 log.go:172] (0xc0009a8d10) Data frame received for 3\nI0412 00:06:18.583248 682 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0412 00:06:18.583280 682 log.go:172] (0xc0009a8d10) Data frame received for 5\nI0412 00:06:18.583295 682 log.go:172] (0xc00098a140) (5) Data frame handling\nI0412 00:06:18.583312 682 log.go:172] (0xc00098a140) (5) Data frame sent\nI0412 00:06:18.583329 682 log.go:172] (0xc0009a8d10) Data frame received for 5\nI0412 00:06:18.583345 682 log.go:172] (0xc00098a140) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0412 00:06:18.585362 682 log.go:172] (0xc0009a8d10) Data frame received 
for 1\nI0412 00:06:18.585404 682 log.go:172] (0xc00092a500) (1) Data frame handling\nI0412 00:06:18.585445 682 log.go:172] (0xc00092a500) (1) Data frame sent\nI0412 00:06:18.585473 682 log.go:172] (0xc0009a8d10) (0xc00092a500) Stream removed, broadcasting: 1\nI0412 00:06:18.585514 682 log.go:172] (0xc0009a8d10) Go away received\nI0412 00:06:18.586025 682 log.go:172] (0xc0009a8d10) (0xc00092a500) Stream removed, broadcasting: 1\nI0412 00:06:18.586056 682 log.go:172] (0xc0009a8d10) (0xc0008d2000) Stream removed, broadcasting: 3\nI0412 00:06:18.586070 682 log.go:172] (0xc0009a8d10) (0xc00098a140) Stream removed, broadcasting: 5\n" Apr 12 00:06:18.590: INFO: stdout: "" Apr 12 00:06:18.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-31 execpodvmzds -- /bin/sh -x -c nc -zv -t -w 2 10.96.96.93 80' Apr 12 00:06:18.794: INFO: stderr: "I0412 00:06:18.718135 701 log.go:172] (0xc0009d0160) (0xc000544c80) Create stream\nI0412 00:06:18.718202 701 log.go:172] (0xc0009d0160) (0xc000544c80) Stream added, broadcasting: 1\nI0412 00:06:18.732720 701 log.go:172] (0xc0009d0160) Reply frame received for 1\nI0412 00:06:18.732765 701 log.go:172] (0xc0009d0160) (0xc0007f9400) Create stream\nI0412 00:06:18.732772 701 log.go:172] (0xc0009d0160) (0xc0007f9400) Stream added, broadcasting: 3\nI0412 00:06:18.734343 701 log.go:172] (0xc0009d0160) Reply frame received for 3\nI0412 00:06:18.734368 701 log.go:172] (0xc0009d0160) (0xc0005a81e0) Create stream\nI0412 00:06:18.734375 701 log.go:172] (0xc0009d0160) (0xc0005a81e0) Stream added, broadcasting: 5\nI0412 00:06:18.735233 701 log.go:172] (0xc0009d0160) Reply frame received for 5\nI0412 00:06:18.787643 701 log.go:172] (0xc0009d0160) Data frame received for 5\nI0412 00:06:18.787692 701 log.go:172] (0xc0005a81e0) (5) Data frame handling\nI0412 00:06:18.787718 701 log.go:172] (0xc0005a81e0) (5) Data frame sent\nI0412 00:06:18.787733 701 log.go:172] 
(0xc0009d0160) Data frame received for 5\nI0412 00:06:18.787744 701 log.go:172] (0xc0005a81e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.96.93 80\nConnection to 10.96.96.93 80 port [tcp/http] succeeded!\nI0412 00:06:18.787773 701 log.go:172] (0xc0009d0160) Data frame received for 3\nI0412 00:06:18.787790 701 log.go:172] (0xc0007f9400) (3) Data frame handling\nI0412 00:06:18.789679 701 log.go:172] (0xc0009d0160) Data frame received for 1\nI0412 00:06:18.789702 701 log.go:172] (0xc000544c80) (1) Data frame handling\nI0412 00:06:18.789713 701 log.go:172] (0xc000544c80) (1) Data frame sent\nI0412 00:06:18.789724 701 log.go:172] (0xc0009d0160) (0xc000544c80) Stream removed, broadcasting: 1\nI0412 00:06:18.789737 701 log.go:172] (0xc0009d0160) Go away received\nI0412 00:06:18.790090 701 log.go:172] (0xc0009d0160) (0xc000544c80) Stream removed, broadcasting: 1\nI0412 00:06:18.790107 701 log.go:172] (0xc0009d0160) (0xc0007f9400) Stream removed, broadcasting: 3\nI0412 00:06:18.790114 701 log.go:172] (0xc0009d0160) (0xc0005a81e0) Stream removed, broadcasting: 5\n" Apr 12 00:06:18.794: INFO: stdout: "" Apr 12 00:06:18.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-31 execpodvmzds -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31297' Apr 12 00:06:19.012: INFO: stderr: "I0412 00:06:18.931005 723 log.go:172] (0xc0007e09a0) (0xc0007da140) Create stream\nI0412 00:06:18.931066 723 log.go:172] (0xc0007e09a0) (0xc0007da140) Stream added, broadcasting: 1\nI0412 00:06:18.934151 723 log.go:172] (0xc0007e09a0) Reply frame received for 1\nI0412 00:06:18.934199 723 log.go:172] (0xc0007e09a0) (0xc000808000) Create stream\nI0412 00:06:18.934217 723 log.go:172] (0xc0007e09a0) (0xc000808000) Stream added, broadcasting: 3\nI0412 00:06:18.935557 723 log.go:172] (0xc0007e09a0) Reply frame received for 3\nI0412 00:06:18.935605 723 log.go:172] (0xc0007e09a0) (0xc0007da1e0) Create stream\nI0412 
00:06:18.935619 723 log.go:172] (0xc0007e09a0) (0xc0007da1e0) Stream added, broadcasting: 5\nI0412 00:06:18.936742 723 log.go:172] (0xc0007e09a0) Reply frame received for 5\nI0412 00:06:19.005769 723 log.go:172] (0xc0007e09a0) Data frame received for 3\nI0412 00:06:19.005822 723 log.go:172] (0xc000808000) (3) Data frame handling\nI0412 00:06:19.005860 723 log.go:172] (0xc0007e09a0) Data frame received for 5\nI0412 00:06:19.005878 723 log.go:172] (0xc0007da1e0) (5) Data frame handling\nI0412 00:06:19.005905 723 log.go:172] (0xc0007da1e0) (5) Data frame sent\nI0412 00:06:19.005921 723 log.go:172] (0xc0007e09a0) Data frame received for 5\nI0412 00:06:19.005937 723 log.go:172] (0xc0007da1e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31297\nConnection to 172.17.0.13 31297 port [tcp/31297] succeeded!\nI0412 00:06:19.007774 723 log.go:172] (0xc0007e09a0) Data frame received for 1\nI0412 00:06:19.007819 723 log.go:172] (0xc0007da140) (1) Data frame handling\nI0412 00:06:19.007844 723 log.go:172] (0xc0007da140) (1) Data frame sent\nI0412 00:06:19.007866 723 log.go:172] (0xc0007e09a0) (0xc0007da140) Stream removed, broadcasting: 1\nI0412 00:06:19.007901 723 log.go:172] (0xc0007e09a0) Go away received\nI0412 00:06:19.008286 723 log.go:172] (0xc0007e09a0) (0xc0007da140) Stream removed, broadcasting: 1\nI0412 00:06:19.008305 723 log.go:172] (0xc0007e09a0) (0xc000808000) Stream removed, broadcasting: 3\nI0412 00:06:19.008313 723 log.go:172] (0xc0007e09a0) (0xc0007da1e0) Stream removed, broadcasting: 5\n" Apr 12 00:06:19.012: INFO: stdout: "" Apr 12 00:06:19.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-31 execpodvmzds -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31297' Apr 12 00:06:19.232: INFO: stderr: "I0412 00:06:19.148721 744 log.go:172] (0xc00098a6e0) (0xc00096a140) Create stream\nI0412 00:06:19.148776 744 log.go:172] (0xc00098a6e0) (0xc00096a140) Stream added, 
broadcasting: 1\nI0412 00:06:19.151696 744 log.go:172] (0xc00098a6e0) Reply frame received for 1\nI0412 00:06:19.151729 744 log.go:172] (0xc00098a6e0) (0xc0008e0000) Create stream\nI0412 00:06:19.151739 744 log.go:172] (0xc00098a6e0) (0xc0008e0000) Stream added, broadcasting: 3\nI0412 00:06:19.152723 744 log.go:172] (0xc00098a6e0) Reply frame received for 3\nI0412 00:06:19.152751 744 log.go:172] (0xc00098a6e0) (0xc00096a1e0) Create stream\nI0412 00:06:19.152765 744 log.go:172] (0xc00098a6e0) (0xc00096a1e0) Stream added, broadcasting: 5\nI0412 00:06:19.153963 744 log.go:172] (0xc00098a6e0) Reply frame received for 5\nI0412 00:06:19.228353 744 log.go:172] (0xc00098a6e0) Data frame received for 5\nI0412 00:06:19.228379 744 log.go:172] (0xc00096a1e0) (5) Data frame handling\nI0412 00:06:19.228386 744 log.go:172] (0xc00096a1e0) (5) Data frame sent\nI0412 00:06:19.228391 744 log.go:172] (0xc00098a6e0) Data frame received for 5\nI0412 00:06:19.228395 744 log.go:172] (0xc00096a1e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31297\nConnection to 172.17.0.12 31297 port [tcp/31297] succeeded!\nI0412 00:06:19.228412 744 log.go:172] (0xc00098a6e0) Data frame received for 3\nI0412 00:06:19.228420 744 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0412 00:06:19.229503 744 log.go:172] (0xc00098a6e0) Data frame received for 1\nI0412 00:06:19.229526 744 log.go:172] (0xc00096a140) (1) Data frame handling\nI0412 00:06:19.229541 744 log.go:172] (0xc00096a140) (1) Data frame sent\nI0412 00:06:19.229552 744 log.go:172] (0xc00098a6e0) (0xc00096a140) Stream removed, broadcasting: 1\nI0412 00:06:19.229565 744 log.go:172] (0xc00098a6e0) Go away received\nI0412 00:06:19.229822 744 log.go:172] (0xc00098a6e0) (0xc00096a140) Stream removed, broadcasting: 1\nI0412 00:06:19.229835 744 log.go:172] (0xc00098a6e0) (0xc0008e0000) Stream removed, broadcasting: 3\nI0412 00:06:19.229840 744 log.go:172] (0xc00098a6e0) (0xc00096a1e0) Stream removed, broadcasting: 5\n" Apr 12 00:06:19.232: 
INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:06:19.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-31" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:12.126 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":91,"skipped":1406,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:06:19.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Apr 12 00:06:19.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Apr 12 00:06:29.814: INFO: >>> kubeConfig: /root/.kube/config
Apr 12 00:06:32.727: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:06:43.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7292" for this suite.
• [SLOW TEST:23.950 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":92,"skipped":1416,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:06:43.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 12 00:06:43.223: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 12 00:06:43.259: INFO: Waiting for terminating namespaces to be deleted...
Apr 12 00:06:43.261: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 12 00:06:43.265: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 12 00:06:43.265: INFO: Container kindnet-cni ready: true, restart count 0
Apr 12 00:06:43.265: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 12 00:06:43.265: INFO: Container kube-proxy ready: true, restart count 0
Apr 12 00:06:43.265: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 12 00:06:43.281: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 12 00:06:43.281: INFO: Container kindnet-cni ready: true, restart count 0
Apr 12 00:06:43.281: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 12 00:06:43.281: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bec6e2d8-f9f5-4351-8660-3b4a78882d87 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-bec6e2d8-f9f5-4351-8660-3b4a78882d87 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bec6e2d8-f9f5-4351-8660-3b4a78882d87
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:06:51.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6283" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:8.229 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":93,"skipped":1423,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:06:51.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-e84f93be-660a-4028-8b2e-64ec354d9c5b
STEP: Creating a pod to test consume configMaps
Apr 12 00:06:51.522: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580" in namespace "projected-840" to be "Succeeded or Failed"
Apr 12 00:06:51.525: INFO: Pod "pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580": Phase="Pending", Reason="", readiness=false. Elapsed: 3.363989ms
Apr 12 00:06:53.529: INFO: Pod "pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006941848s
Apr 12 00:06:55.533: INFO: Pod "pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011221112s
STEP: Saw pod success
Apr 12 00:06:55.533: INFO: Pod "pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580" satisfied condition "Succeeded or Failed"
Apr 12 00:06:55.536: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580 container projected-configmap-volume-test:
STEP: delete the pod
Apr 12 00:06:55.583: INFO: Waiting for pod pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580 to disappear
Apr 12 00:06:55.591: INFO: Pod pod-projected-configmaps-93b1aa1f-a553-4b28-9d4d-69c52a0d5580 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:06:55.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-840" for this suite.
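The pod created above consumes a ConfigMap through a projected volume while running as a non-root user. A minimal sketch of such a pod — the ConfigMap name matches the one logged above, but the image, UID, command, and data key are illustrative, not copied from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                        # non-root UID, the point of this test
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                         # illustrative; the e2e suite uses its own test image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]   # key name illustrative
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-e84f93be-660a-4028-8b2e-64ec354d9c5b
```

The test then waits for the pod to reach "Succeeded or Failed" and inspects the container logs, which is what the Elapsed/Phase entries above record.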
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:06:55.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:06:55.652: INFO: Creating ReplicaSet my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb
Apr 12 00:06:55.680: INFO: Pod name my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb: Found 0 pods out of 1
Apr 12 00:07:00.684: INFO: Pod name my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb: Found 1 pods out of 1
Apr 12 00:07:00.684: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb" is running
Apr 12 00:07:00.690: INFO: Pod "my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb-6q7qt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:06:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:06:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:06:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:06:55 +0000 UTC Reason: Message:}])
Apr 12 00:07:00.690: INFO: Trying to dial the pod
Apr 12 00:07:05.702: INFO: Controller my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb: Got expected result from replica 1 [my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb-6q7qt]: "my-hostname-basic-a3d7e108-8239-421c-8d9a-17b34aad6ffb-6q7qt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:07:05.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3514" for this suite.
• [SLOW TEST:10.113 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":95,"skipped":1454,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:07:05.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:07:05.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 12 00:07:07.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4951 create -f -'
Apr 12 00:07:10.664: INFO: stderr: ""
Apr 12 00:07:10.664: INFO: stdout: "e2e-test-crd-publish-openapi-9964-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 12 00:07:10.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4951 delete e2e-test-crd-publish-openapi-9964-crds test-cr'
Apr 12 00:07:10.779: INFO: stderr: ""
Apr 12 00:07:10.779: INFO: stdout: "e2e-test-crd-publish-openapi-9964-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 12 00:07:10.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4951 apply -f -'
Apr 12 00:07:11.057: INFO: stderr: ""
Apr 12 00:07:11.057: INFO: stdout: "e2e-test-crd-publish-openapi-9964-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 12 00:07:11.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4951 delete e2e-test-crd-publish-openapi-9964-crds test-cr'
Apr 12 00:07:11.152: INFO: stderr: ""
Apr 12 00:07:11.152: INFO: stdout: "e2e-test-crd-publish-openapi-9964-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 12 00:07:11.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9964-crds'
Apr 12 00:07:11.410: INFO: stderr: ""
Apr 12 00:07:11.410: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9964-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:07:13.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4951" for this suite.
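The `kubectl explain` output above comes from a CRD whose schema preserves unknown properties in a nested field. A minimal sketch of such a CRD — the group and field descriptions mirror the explain output, but the resource names are illustrative and the exact schema in the e2e test may differ:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.crd-publish-openapi-test-unknown-in-nested.example.com   # illustrative name
spec:
  group: crd-publish-openapi-test-unknown-in-nested.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            # keep (do not prune) unknown properties inside this embedded object
            x-kubernetes-preserve-unknown-fields: true
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

With that flag set, `kubectl create`/`apply` accept requests carrying arbitrary unknown properties under `spec`, which is exactly what the client-side validation STEP above exercises.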
• [SLOW TEST:7.623 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":96,"skipped":1492,"failed":0}
SSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:07:13.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:07:13.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5547
I0412 00:07:13.446735 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5547, replica count: 1
I0412 00:07:14.497202 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0412 00:07:15.497438 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0
runningButNotReady I0412 00:07:16.497667 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0412 00:07:17.497922 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 12 00:07:17.648: INFO: Created: latency-svc-n7hh6 Apr 12 00:07:17.659: INFO: Got endpoints: latency-svc-n7hh6 [61.310731ms] Apr 12 00:07:17.686: INFO: Created: latency-svc-6lgzp Apr 12 00:07:17.700: INFO: Got endpoints: latency-svc-6lgzp [40.527806ms] Apr 12 00:07:17.719: INFO: Created: latency-svc-6x2vw Apr 12 00:07:17.735: INFO: Got endpoints: latency-svc-6x2vw [76.223289ms] Apr 12 00:07:17.780: INFO: Created: latency-svc-fw8wr Apr 12 00:07:17.803: INFO: Created: latency-svc-rmpl6 Apr 12 00:07:17.803: INFO: Got endpoints: latency-svc-fw8wr [143.814068ms] Apr 12 00:07:17.827: INFO: Got endpoints: latency-svc-rmpl6 [167.938016ms] Apr 12 00:07:17.854: INFO: Created: latency-svc-4f82h Apr 12 00:07:17.865: INFO: Got endpoints: latency-svc-4f82h [205.271276ms] Apr 12 00:07:17.912: INFO: Created: latency-svc-5jtpz Apr 12 00:07:17.938: INFO: Got endpoints: latency-svc-5jtpz [278.509992ms] Apr 12 00:07:17.938: INFO: Created: latency-svc-9dfg5 Apr 12 00:07:17.955: INFO: Got endpoints: latency-svc-9dfg5 [295.040982ms] Apr 12 00:07:17.986: INFO: Created: latency-svc-zlqsv Apr 12 00:07:17.997: INFO: Got endpoints: latency-svc-zlqsv [337.472729ms] Apr 12 00:07:18.037: INFO: Created: latency-svc-ff7vv Apr 12 00:07:18.050: INFO: Got endpoints: latency-svc-ff7vv [391.196271ms] Apr 12 00:07:18.073: INFO: Created: latency-svc-fq666 Apr 12 00:07:18.087: INFO: Got endpoints: latency-svc-fq666 [427.105643ms] Apr 12 00:07:18.109: INFO: Created: latency-svc-88qcb Apr 12 00:07:18.125: INFO: Got endpoints: latency-svc-88qcb [465.334469ms] Apr 12 00:07:18.206: INFO: Created: latency-svc-l7g4t Apr 12 00:07:18.215: INFO: Got 
endpoints: latency-svc-l7g4t [555.030974ms] Apr 12 00:07:18.255: INFO: Created: latency-svc-ztwzp Apr 12 00:07:18.275: INFO: Got endpoints: latency-svc-ztwzp [615.082431ms] Apr 12 00:07:18.343: INFO: Created: latency-svc-smf8w Apr 12 00:07:18.366: INFO: Created: latency-svc-76grh Apr 12 00:07:18.366: INFO: Got endpoints: latency-svc-smf8w [706.450184ms] Apr 12 00:07:18.382: INFO: Got endpoints: latency-svc-76grh [723.047993ms] Apr 12 00:07:18.403: INFO: Created: latency-svc-2jjvt Apr 12 00:07:18.418: INFO: Got endpoints: latency-svc-2jjvt [718.778385ms] Apr 12 00:07:18.439: INFO: Created: latency-svc-w64lr Apr 12 00:07:18.481: INFO: Got endpoints: latency-svc-w64lr [745.456946ms] Apr 12 00:07:18.495: INFO: Created: latency-svc-tp42d Apr 12 00:07:18.512: INFO: Got endpoints: latency-svc-tp42d [708.391495ms] Apr 12 00:07:18.537: INFO: Created: latency-svc-brw4d Apr 12 00:07:18.554: INFO: Got endpoints: latency-svc-brw4d [726.311053ms] Apr 12 00:07:18.655: INFO: Created: latency-svc-t8zz7 Apr 12 00:07:18.672: INFO: Created: latency-svc-ndbdb Apr 12 00:07:18.674: INFO: Got endpoints: latency-svc-t8zz7 [809.103358ms] Apr 12 00:07:18.792: INFO: Got endpoints: latency-svc-ndbdb [854.287616ms] Apr 12 00:07:18.813: INFO: Created: latency-svc-dd2gf Apr 12 00:07:18.829: INFO: Got endpoints: latency-svc-dd2gf [874.772585ms] Apr 12 00:07:18.849: INFO: Created: latency-svc-8l9nn Apr 12 00:07:18.866: INFO: Got endpoints: latency-svc-8l9nn [868.672627ms] Apr 12 00:07:18.930: INFO: Created: latency-svc-db586 Apr 12 00:07:18.955: INFO: Created: latency-svc-pwr82 Apr 12 00:07:18.955: INFO: Got endpoints: latency-svc-db586 [904.618675ms] Apr 12 00:07:18.996: INFO: Got endpoints: latency-svc-pwr82 [909.623791ms] Apr 12 00:07:19.027: INFO: Created: latency-svc-j7ns9 Apr 12 00:07:19.061: INFO: Got endpoints: latency-svc-j7ns9 [936.064282ms] Apr 12 00:07:19.077: INFO: Created: latency-svc-bj8tf Apr 12 00:07:19.089: INFO: Got endpoints: latency-svc-bj8tf [874.498094ms] Apr 12 00:07:19.107: 
INFO: Created: latency-svc-kjlfc Apr 12 00:07:19.131: INFO: Got endpoints: latency-svc-kjlfc [855.804478ms] Apr 12 00:07:19.206: INFO: Created: latency-svc-nqx4z Apr 12 00:07:19.231: INFO: Got endpoints: latency-svc-nqx4z [864.975245ms] Apr 12 00:07:19.232: INFO: Created: latency-svc-wk467 Apr 12 00:07:19.245: INFO: Got endpoints: latency-svc-wk467 [862.897823ms] Apr 12 00:07:19.272: INFO: Created: latency-svc-4pkvg Apr 12 00:07:19.293: INFO: Got endpoints: latency-svc-4pkvg [874.87281ms] Apr 12 00:07:19.359: INFO: Created: latency-svc-ttl2j Apr 12 00:07:19.381: INFO: Got endpoints: latency-svc-ttl2j [899.839718ms] Apr 12 00:07:19.407: INFO: Created: latency-svc-b7r24 Apr 12 00:07:19.423: INFO: Got endpoints: latency-svc-b7r24 [910.553414ms] Apr 12 00:07:19.469: INFO: Created: latency-svc-lvqjw Apr 12 00:07:19.483: INFO: Got endpoints: latency-svc-lvqjw [929.321179ms] Apr 12 00:07:19.512: INFO: Created: latency-svc-rv7fd Apr 12 00:07:19.525: INFO: Got endpoints: latency-svc-rv7fd [850.610819ms] Apr 12 00:07:19.542: INFO: Created: latency-svc-662x8 Apr 12 00:07:19.554: INFO: Got endpoints: latency-svc-662x8 [762.089992ms] Apr 12 00:07:19.620: INFO: Created: latency-svc-6pbql Apr 12 00:07:19.626: INFO: Got endpoints: latency-svc-6pbql [796.591234ms] Apr 12 00:07:19.647: INFO: Created: latency-svc-s9znf Apr 12 00:07:19.658: INFO: Got endpoints: latency-svc-s9znf [792.63962ms] Apr 12 00:07:19.674: INFO: Created: latency-svc-dxjm8 Apr 12 00:07:19.689: INFO: Got endpoints: latency-svc-dxjm8 [733.775064ms] Apr 12 00:07:19.704: INFO: Created: latency-svc-djshx Apr 12 00:07:19.775: INFO: Got endpoints: latency-svc-djshx [778.220105ms] Apr 12 00:07:19.777: INFO: Created: latency-svc-jh9lp Apr 12 00:07:19.784: INFO: Got endpoints: latency-svc-jh9lp [723.033218ms] Apr 12 00:07:19.857: INFO: Created: latency-svc-rnspl Apr 12 00:07:19.862: INFO: Got endpoints: latency-svc-rnspl [772.794395ms] Apr 12 00:07:19.868: INFO: Created: latency-svc-kjfnk Apr 12 00:07:19.912: INFO: Got 
endpoints: latency-svc-kjfnk [780.774603ms] Apr 12 00:07:19.914: INFO: Created: latency-svc-2vwgl Apr 12 00:07:19.922: INFO: Got endpoints: latency-svc-2vwgl [690.829145ms] Apr 12 00:07:19.946: INFO: Created: latency-svc-zxprd Apr 12 00:07:19.962: INFO: Got endpoints: latency-svc-zxprd [716.394992ms] Apr 12 00:07:19.980: INFO: Created: latency-svc-rgzwm Apr 12 00:07:19.992: INFO: Got endpoints: latency-svc-rgzwm [698.099532ms] Apr 12 00:07:20.010: INFO: Created: latency-svc-2dqch Apr 12 00:07:20.038: INFO: Got endpoints: latency-svc-2dqch [657.054604ms] Apr 12 00:07:20.052: INFO: Created: latency-svc-cz49z Apr 12 00:07:20.063: INFO: Got endpoints: latency-svc-cz49z [640.812519ms] Apr 12 00:07:20.082: INFO: Created: latency-svc-wttnd Apr 12 00:07:20.094: INFO: Got endpoints: latency-svc-wttnd [610.404209ms] Apr 12 00:07:20.115: INFO: Created: latency-svc-mwlw2 Apr 12 00:07:20.129: INFO: Got endpoints: latency-svc-mwlw2 [604.546251ms] Apr 12 00:07:20.164: INFO: Created: latency-svc-6gl8t Apr 12 00:07:20.172: INFO: Got endpoints: latency-svc-6gl8t [617.732095ms] Apr 12 00:07:20.211: INFO: Created: latency-svc-9lgjh Apr 12 00:07:20.222: INFO: Got endpoints: latency-svc-9lgjh [595.915022ms] Apr 12 00:07:20.257: INFO: Created: latency-svc-t4djx Apr 12 00:07:20.289: INFO: Got endpoints: latency-svc-t4djx [630.603105ms] Apr 12 00:07:20.305: INFO: Created: latency-svc-9vhjq Apr 12 00:07:20.318: INFO: Got endpoints: latency-svc-9vhjq [629.157552ms] Apr 12 00:07:20.335: INFO: Created: latency-svc-qm7kr Apr 12 00:07:20.348: INFO: Got endpoints: latency-svc-qm7kr [573.342291ms] Apr 12 00:07:20.367: INFO: Created: latency-svc-llvsm Apr 12 00:07:20.378: INFO: Got endpoints: latency-svc-llvsm [593.195996ms] Apr 12 00:07:20.427: INFO: Created: latency-svc-brxsv Apr 12 00:07:20.451: INFO: Created: latency-svc-pdhbf Apr 12 00:07:20.451: INFO: Got endpoints: latency-svc-brxsv [588.42346ms] Apr 12 00:07:20.472: INFO: Got endpoints: latency-svc-pdhbf [560.379595ms] Apr 12 00:07:20.495: 
INFO: Created: latency-svc-9sq5d Apr 12 00:07:20.507: INFO: Got endpoints: latency-svc-9sq5d [584.877833ms] Apr 12 00:07:20.520: INFO: Created: latency-svc-sgl59 Apr 12 00:07:20.546: INFO: Got endpoints: latency-svc-sgl59 [584.35825ms] Apr 12 00:07:20.567: INFO: Created: latency-svc-r7cdd Apr 12 00:07:20.585: INFO: Got endpoints: latency-svc-r7cdd [593.586668ms] Apr 12 00:07:20.618: INFO: Created: latency-svc-zqxd5 Apr 12 00:07:20.678: INFO: Got endpoints: latency-svc-zqxd5 [640.385128ms] Apr 12 00:07:20.684: INFO: Created: latency-svc-7sfmk Apr 12 00:07:20.699: INFO: Got endpoints: latency-svc-7sfmk [635.301913ms] Apr 12 00:07:20.718: INFO: Created: latency-svc-drftw Apr 12 00:07:20.736: INFO: Got endpoints: latency-svc-drftw [641.884736ms] Apr 12 00:07:20.754: INFO: Created: latency-svc-tmlg2 Apr 12 00:07:20.765: INFO: Got endpoints: latency-svc-tmlg2 [635.257084ms] Apr 12 00:07:20.828: INFO: Created: latency-svc-7srgn Apr 12 00:07:20.828: INFO: Got endpoints: latency-svc-7srgn [655.634492ms] Apr 12 00:07:20.852: INFO: Created: latency-svc-zq22t Apr 12 00:07:20.863: INFO: Got endpoints: latency-svc-zq22t [641.009056ms] Apr 12 00:07:20.901: INFO: Created: latency-svc-97ghq Apr 12 00:07:20.954: INFO: Got endpoints: latency-svc-97ghq [664.812761ms] Apr 12 00:07:20.976: INFO: Created: latency-svc-vvrft Apr 12 00:07:20.989: INFO: Got endpoints: latency-svc-vvrft [670.970314ms] Apr 12 00:07:21.005: INFO: Created: latency-svc-4qq2h Apr 12 00:07:21.019: INFO: Got endpoints: latency-svc-4qq2h [671.237369ms] Apr 12 00:07:21.098: INFO: Created: latency-svc-fswpq Apr 12 00:07:21.122: INFO: Got endpoints: latency-svc-fswpq [744.261065ms] Apr 12 00:07:21.123: INFO: Created: latency-svc-l47vs Apr 12 00:07:21.136: INFO: Got endpoints: latency-svc-l47vs [684.870733ms] Apr 12 00:07:21.152: INFO: Created: latency-svc-24zvn Apr 12 00:07:21.166: INFO: Got endpoints: latency-svc-24zvn [693.934349ms] Apr 12 00:07:21.182: INFO: Created: latency-svc-v9787 Apr 12 00:07:21.196: INFO: Got 
endpoints: latency-svc-v9787 [688.875795ms] Apr 12 00:07:21.236: INFO: Created: latency-svc-k5gbs Apr 12 00:07:21.261: INFO: Got endpoints: latency-svc-k5gbs [714.590724ms] Apr 12 00:07:21.263: INFO: Created: latency-svc-qn7bs Apr 12 00:07:21.274: INFO: Got endpoints: latency-svc-qn7bs [688.465225ms] Apr 12 00:07:21.287: INFO: Created: latency-svc-hnm98 Apr 12 00:07:21.299: INFO: Got endpoints: latency-svc-hnm98 [620.790189ms] Apr 12 00:07:21.311: INFO: Created: latency-svc-qmcpd Apr 12 00:07:21.328: INFO: Got endpoints: latency-svc-qmcpd [628.854011ms] Apr 12 00:07:21.398: INFO: Created: latency-svc-9x4hv Apr 12 00:07:21.420: INFO: Got endpoints: latency-svc-9x4hv [684.636934ms] Apr 12 00:07:21.459: INFO: Created: latency-svc-44bc4 Apr 12 00:07:21.469: INFO: Got endpoints: latency-svc-44bc4 [703.867521ms] Apr 12 00:07:21.523: INFO: Created: latency-svc-s725f Apr 12 00:07:21.528: INFO: Got endpoints: latency-svc-s725f [699.769615ms] Apr 12 00:07:21.558: INFO: Created: latency-svc-htv4r Apr 12 00:07:21.582: INFO: Got endpoints: latency-svc-htv4r [719.255152ms] Apr 12 00:07:21.599: INFO: Created: latency-svc-xqkhb Apr 12 00:07:21.612: INFO: Got endpoints: latency-svc-xqkhb [658.232478ms] Apr 12 00:07:21.667: INFO: Created: latency-svc-49fw2 Apr 12 00:07:21.686: INFO: Created: latency-svc-bmf2t Apr 12 00:07:21.686: INFO: Got endpoints: latency-svc-49fw2 [696.491708ms] Apr 12 00:07:21.696: INFO: Got endpoints: latency-svc-bmf2t [676.900667ms] Apr 12 00:07:21.716: INFO: Created: latency-svc-2x6xh Apr 12 00:07:21.740: INFO: Got endpoints: latency-svc-2x6xh [617.619373ms] Apr 12 00:07:21.799: INFO: Created: latency-svc-rhzdq Apr 12 00:07:21.821: INFO: Got endpoints: latency-svc-rhzdq [685.369497ms] Apr 12 00:07:21.822: INFO: Created: latency-svc-nmq5h Apr 12 00:07:21.844: INFO: Got endpoints: latency-svc-nmq5h [677.60908ms] Apr 12 00:07:21.869: INFO: Created: latency-svc-9q4zw Apr 12 00:07:21.879: INFO: Got endpoints: latency-svc-9q4zw [683.010997ms] Apr 12 00:07:21.893: 
INFO: Created: latency-svc-w8zz4 Apr 12 00:07:21.918: INFO: Got endpoints: latency-svc-w8zz4 [657.172719ms] Apr 12 00:07:21.932: INFO: Created: latency-svc-fmqd6 Apr 12 00:07:21.948: INFO: Got endpoints: latency-svc-fmqd6 [674.177043ms] Apr 12 00:07:21.961: INFO: Created: latency-svc-5d5l5 Apr 12 00:07:21.975: INFO: Got endpoints: latency-svc-5d5l5 [675.659934ms] Apr 12 00:07:21.998: INFO: Created: latency-svc-z42rl Apr 12 00:07:22.007: INFO: Got endpoints: latency-svc-z42rl [679.633342ms] Apr 12 00:07:22.056: INFO: Created: latency-svc-5c68w Apr 12 00:07:22.073: INFO: Got endpoints: latency-svc-5c68w [652.60017ms] Apr 12 00:07:22.074: INFO: Created: latency-svc-fvlb4 Apr 12 00:07:22.097: INFO: Got endpoints: latency-svc-fvlb4 [628.4326ms] Apr 12 00:07:22.121: INFO: Created: latency-svc-nh8qb Apr 12 00:07:22.134: INFO: Got endpoints: latency-svc-nh8qb [605.92312ms] Apr 12 00:07:22.205: INFO: Created: latency-svc-pqskj Apr 12 00:07:22.211: INFO: Got endpoints: latency-svc-pqskj [628.657085ms] Apr 12 00:07:22.244: INFO: Created: latency-svc-6rmsd Apr 12 00:07:22.274: INFO: Got endpoints: latency-svc-6rmsd [661.430846ms] Apr 12 00:07:22.289: INFO: Created: latency-svc-mvkjr Apr 12 00:07:22.361: INFO: Got endpoints: latency-svc-mvkjr [675.497622ms] Apr 12 00:07:22.391: INFO: Created: latency-svc-rqgnx Apr 12 00:07:22.400: INFO: Got endpoints: latency-svc-rqgnx [704.022699ms] Apr 12 00:07:22.417: INFO: Created: latency-svc-4vbwp Apr 12 00:07:22.441: INFO: Got endpoints: latency-svc-4vbwp [701.625793ms] Apr 12 00:07:22.494: INFO: Created: latency-svc-ffbwg Apr 12 00:07:22.507: INFO: Created: latency-svc-6gr92 Apr 12 00:07:22.507: INFO: Got endpoints: latency-svc-ffbwg [686.210387ms] Apr 12 00:07:22.520: INFO: Got endpoints: latency-svc-6gr92 [676.502699ms] Apr 12 00:07:22.541: INFO: Created: latency-svc-t62ff Apr 12 00:07:22.571: INFO: Got endpoints: latency-svc-t62ff [691.365865ms] Apr 12 00:07:22.631: INFO: Created: latency-svc-pwm4n Apr 12 00:07:22.634: INFO: Got 
endpoints: latency-svc-pwm4n [716.190905ms] Apr 12 00:07:22.682: INFO: Created: latency-svc-jgfq8 Apr 12 00:07:22.695: INFO: Got endpoints: latency-svc-jgfq8 [746.598452ms] Apr 12 00:07:22.718: INFO: Created: latency-svc-84d7w Apr 12 00:07:22.756: INFO: Got endpoints: latency-svc-84d7w [780.853134ms] Apr 12 00:07:22.768: INFO: Created: latency-svc-hch8k Apr 12 00:07:22.781: INFO: Got endpoints: latency-svc-hch8k [773.107494ms] Apr 12 00:07:22.799: INFO: Created: latency-svc-xw8s4 Apr 12 00:07:22.810: INFO: Got endpoints: latency-svc-xw8s4 [737.222582ms] Apr 12 00:07:22.829: INFO: Created: latency-svc-zq5x5 Apr 12 00:07:22.840: INFO: Got endpoints: latency-svc-zq5x5 [742.874916ms] Apr 12 00:07:22.894: INFO: Created: latency-svc-5nxhd Apr 12 00:07:22.900: INFO: Got endpoints: latency-svc-5nxhd [766.570851ms] Apr 12 00:07:22.945: INFO: Created: latency-svc-nqx49 Apr 12 00:07:22.967: INFO: Got endpoints: latency-svc-nqx49 [755.830433ms] Apr 12 00:07:22.981: INFO: Created: latency-svc-4ljsd Apr 12 00:07:22.990: INFO: Got endpoints: latency-svc-4ljsd [716.553796ms] Apr 12 00:07:23.056: INFO: Created: latency-svc-7w6kg Apr 12 00:07:23.075: INFO: Got endpoints: latency-svc-7w6kg [713.579922ms] Apr 12 00:07:23.076: INFO: Created: latency-svc-7bxb6 Apr 12 00:07:23.090: INFO: Got endpoints: latency-svc-7bxb6 [689.681962ms] Apr 12 00:07:23.111: INFO: Created: latency-svc-xj7nb Apr 12 00:07:23.129: INFO: Got endpoints: latency-svc-xj7nb [687.380796ms] Apr 12 00:07:23.147: INFO: Created: latency-svc-f4l6z Apr 12 00:07:23.206: INFO: Got endpoints: latency-svc-f4l6z [698.195399ms] Apr 12 00:07:23.207: INFO: Created: latency-svc-4rw7p Apr 12 00:07:23.233: INFO: Got endpoints: latency-svc-4rw7p [712.784566ms] Apr 12 00:07:23.263: INFO: Created: latency-svc-rhgmn Apr 12 00:07:23.275: INFO: Got endpoints: latency-svc-rhgmn [704.520542ms] Apr 12 00:07:23.297: INFO: Created: latency-svc-xsvm7 Apr 12 00:07:23.325: INFO: Got endpoints: latency-svc-xsvm7 [690.380206ms] Apr 12 00:07:23.351: 
INFO: Created: latency-svc-q9fbm
Apr 12 00:07:23.362: INFO: Got endpoints: latency-svc-q9fbm [666.898569ms]
Apr 12 00:07:23.381: INFO: Created: latency-svc-z5lng
Apr 12 00:07:23.398: INFO: Got endpoints: latency-svc-z5lng [641.843816ms]
Apr 12 00:07:23.458: INFO: Created: latency-svc-sxqsj
Apr 12 00:07:23.479: INFO: Got endpoints: latency-svc-sxqsj [698.527882ms]
Apr 12 00:07:23.480: INFO: Created: latency-svc-zdcx6
Apr 12 00:07:23.500: INFO: Got endpoints: latency-svc-zdcx6 [689.296594ms]
Apr 12 00:07:23.515: INFO: Created: latency-svc-pkvjx
Apr 12 00:07:23.523: INFO: Got endpoints: latency-svc-pkvjx [682.94332ms]
Apr 12 00:07:23.539: INFO: Created: latency-svc-6788z
Apr 12 00:07:23.547: INFO: Got endpoints: latency-svc-6788z [647.126732ms]
Apr 12 00:07:23.603: INFO: Created: latency-svc-wzggc
Apr 12 00:07:23.616: INFO: Got endpoints: latency-svc-wzggc [649.382543ms]
Apr 12 00:07:23.632: INFO: Created: latency-svc-jl94c
Apr 12 00:07:23.647: INFO: Got endpoints: latency-svc-jl94c [656.970851ms]
Apr 12 00:07:23.662: INFO: Created: latency-svc-mstzc
Apr 12 00:07:23.676: INFO: Got endpoints: latency-svc-mstzc [601.506346ms]
Apr 12 00:07:23.714: INFO: Created: latency-svc-vgzhn
Apr 12 00:07:23.731: INFO: Got endpoints: latency-svc-vgzhn [640.621112ms]
Apr 12 00:07:23.732: INFO: Created: latency-svc-qstnx
Apr 12 00:07:23.755: INFO: Got endpoints: latency-svc-qstnx [626.170943ms]
Apr 12 00:07:23.779: INFO: Created: latency-svc-fgfjc
Apr 12 00:07:23.791: INFO: Got endpoints: latency-svc-fgfjc [585.002137ms]
Apr 12 00:07:23.812: INFO: Created: latency-svc-gfx5m
Apr 12 00:07:23.847: INFO: Got endpoints: latency-svc-gfx5m [613.568076ms]
Apr 12 00:07:23.860: INFO: Created: latency-svc-tkrxk
Apr 12 00:07:23.890: INFO: Got endpoints: latency-svc-tkrxk [614.973951ms]
Apr 12 00:07:23.927: INFO: Created: latency-svc-j966n
Apr 12 00:07:23.943: INFO: Got endpoints: latency-svc-j966n [617.911535ms]
Apr 12 00:07:23.983: INFO: Created: latency-svc-tpvhk
Apr 12 00:07:23.990: INFO: Got endpoints: latency-svc-tpvhk [628.666361ms]
Apr 12 00:07:24.007: INFO: Created: latency-svc-lg8cz
Apr 12 00:07:24.021: INFO: Got endpoints: latency-svc-lg8cz [622.771387ms]
Apr 12 00:07:24.037: INFO: Created: latency-svc-k24hz
Apr 12 00:07:24.051: INFO: Got endpoints: latency-svc-k24hz [571.452477ms]
Apr 12 00:07:24.064: INFO: Created: latency-svc-qjcvv
Apr 12 00:07:24.081: INFO: Got endpoints: latency-svc-qjcvv [580.942376ms]
Apr 12 00:07:24.121: INFO: Created: latency-svc-7v9rm
Apr 12 00:07:24.142: INFO: Created: latency-svc-j7xbr
Apr 12 00:07:24.142: INFO: Got endpoints: latency-svc-7v9rm [619.324958ms]
Apr 12 00:07:24.156: INFO: Got endpoints: latency-svc-j7xbr [608.73635ms]
Apr 12 00:07:24.172: INFO: Created: latency-svc-45qzl
Apr 12 00:07:24.186: INFO: Got endpoints: latency-svc-45qzl [569.408675ms]
Apr 12 00:07:24.247: INFO: Created: latency-svc-vblv5
Apr 12 00:07:24.264: INFO: Got endpoints: latency-svc-vblv5 [617.087357ms]
Apr 12 00:07:24.265: INFO: Created: latency-svc-gh97l
Apr 12 00:07:24.288: INFO: Got endpoints: latency-svc-gh97l [611.359093ms]
Apr 12 00:07:24.307: INFO: Created: latency-svc-twjbs
Apr 12 00:07:24.318: INFO: Got endpoints: latency-svc-twjbs [587.25148ms]
Apr 12 00:07:24.334: INFO: Created: latency-svc-5qvl4
Apr 12 00:07:24.385: INFO: Got endpoints: latency-svc-5qvl4 [629.728242ms]
Apr 12 00:07:24.400: INFO: Created: latency-svc-bqzdl
Apr 12 00:07:24.426: INFO: Got endpoints: latency-svc-bqzdl [635.421741ms]
Apr 12 00:07:24.450: INFO: Created: latency-svc-pdpsc
Apr 12 00:07:24.464: INFO: Got endpoints: latency-svc-pdpsc [617.414908ms]
Apr 12 00:07:24.482: INFO: Created: latency-svc-hq7dm
Apr 12 00:07:24.523: INFO: Got endpoints: latency-svc-hq7dm [632.403411ms]
Apr 12 00:07:24.553: INFO: Created: latency-svc-9vqp7
Apr 12 00:07:24.566: INFO: Got endpoints: latency-svc-9vqp7 [622.968016ms]
Apr 12 00:07:24.586: INFO: Created: latency-svc-g4m8q
Apr 12 00:07:24.602: INFO: Got endpoints: latency-svc-g4m8q [611.514402ms]
Apr 12 00:07:24.655: INFO: Created: latency-svc-dwwxl
Apr 12 00:07:24.682: INFO: Got endpoints: latency-svc-dwwxl [661.553547ms]
Apr 12 00:07:24.708: INFO: Created: latency-svc-65t6r
Apr 12 00:07:24.722: INFO: Got endpoints: latency-svc-65t6r [671.251444ms]
Apr 12 00:07:24.738: INFO: Created: latency-svc-nztl4
Apr 12 00:07:24.751: INFO: Got endpoints: latency-svc-nztl4 [670.776201ms]
Apr 12 00:07:24.786: INFO: Created: latency-svc-nqgb8
Apr 12 00:07:24.791: INFO: Got endpoints: latency-svc-nqgb8 [648.605523ms]
Apr 12 00:07:24.810: INFO: Created: latency-svc-2tsdb
Apr 12 00:07:24.827: INFO: Got endpoints: latency-svc-2tsdb [670.921462ms]
Apr 12 00:07:24.844: INFO: Created: latency-svc-7zrsh
Apr 12 00:07:24.857: INFO: Got endpoints: latency-svc-7zrsh [670.887094ms]
Apr 12 00:07:24.874: INFO: Created: latency-svc-4lcgc
Apr 12 00:07:24.918: INFO: Got endpoints: latency-svc-4lcgc [653.268089ms]
Apr 12 00:07:24.936: INFO: Created: latency-svc-jcdsb
Apr 12 00:07:24.953: INFO: Got endpoints: latency-svc-jcdsb [665.08997ms]
Apr 12 00:07:24.972: INFO: Created: latency-svc-6dqx9
Apr 12 00:07:24.989: INFO: Got endpoints: latency-svc-6dqx9 [671.340506ms]
Apr 12 00:07:25.008: INFO: Created: latency-svc-5b5zz
Apr 12 00:07:25.044: INFO: Got endpoints: latency-svc-5b5zz [658.551557ms]
Apr 12 00:07:25.056: INFO: Created: latency-svc-jgb8p
Apr 12 00:07:25.069: INFO: Got endpoints: latency-svc-jgb8p [642.844644ms]
Apr 12 00:07:25.090: INFO: Created: latency-svc-gdddr
Apr 12 00:07:25.105: INFO: Got endpoints: latency-svc-gdddr [640.983058ms]
Apr 12 00:07:25.126: INFO: Created: latency-svc-wncdr
Apr 12 00:07:25.141: INFO: Got endpoints: latency-svc-wncdr [618.690652ms]
Apr 12 00:07:25.181: INFO: Created: latency-svc-48m4k
Apr 12 00:07:25.189: INFO: Got endpoints: latency-svc-48m4k [623.430467ms]
Apr 12 00:07:25.210: INFO: Created: latency-svc-lw2jx
Apr 12 00:07:25.225: INFO: Got endpoints: latency-svc-lw2jx [623.142978ms]
Apr 12 00:07:25.244: INFO: Created: latency-svc-5zxtz
Apr 12 00:07:25.266: INFO: Got endpoints: latency-svc-5zxtz [583.85727ms]
Apr 12 00:07:25.313: INFO: Created: latency-svc-bz4rv
Apr 12 00:07:25.319: INFO: Got endpoints: latency-svc-bz4rv [596.832582ms]
Apr 12 00:07:25.348: INFO: Created: latency-svc-kvnf2
Apr 12 00:07:25.360: INFO: Got endpoints: latency-svc-kvnf2 [608.680414ms]
Apr 12 00:07:25.377: INFO: Created: latency-svc-nzjl4
Apr 12 00:07:25.396: INFO: Got endpoints: latency-svc-nzjl4 [605.310388ms]
Apr 12 00:07:25.433: INFO: Created: latency-svc-9gghs
Apr 12 00:07:25.452: INFO: Created: latency-svc-pwhfk
Apr 12 00:07:25.452: INFO: Got endpoints: latency-svc-9gghs [625.047336ms]
Apr 12 00:07:25.468: INFO: Got endpoints: latency-svc-pwhfk [611.342243ms]
Apr 12 00:07:25.494: INFO: Created: latency-svc-qrfhq
Apr 12 00:07:25.511: INFO: Got endpoints: latency-svc-qrfhq [592.748213ms]
Apr 12 00:07:25.530: INFO: Created: latency-svc-d7cks
Apr 12 00:07:25.565: INFO: Got endpoints: latency-svc-d7cks [611.815853ms]
Apr 12 00:07:25.606: INFO: Created: latency-svc-qpfth
Apr 12 00:07:25.620: INFO: Got endpoints: latency-svc-qpfth [630.704707ms]
Apr 12 00:07:25.641: INFO: Created: latency-svc-2lstb
Apr 12 00:07:25.656: INFO: Got endpoints: latency-svc-2lstb [612.597969ms]
Apr 12 00:07:25.702: INFO: Created: latency-svc-9q4h4
Apr 12 00:07:25.704: INFO: Got endpoints: latency-svc-9q4h4 [634.830343ms]
Apr 12 00:07:25.726: INFO: Created: latency-svc-bttk5
Apr 12 00:07:25.740: INFO: Got endpoints: latency-svc-bttk5 [635.190641ms]
Apr 12 00:07:25.758: INFO: Created: latency-svc-rrbml
Apr 12 00:07:25.770: INFO: Got endpoints: latency-svc-rrbml [628.746577ms]
Apr 12 00:07:25.788: INFO: Created: latency-svc-9zgch
Apr 12 00:07:25.822: INFO: Got endpoints: latency-svc-9zgch [632.359152ms]
Apr 12 00:07:25.824: INFO: Created: latency-svc-xnl9l
Apr 12 00:07:25.836: INFO: Got endpoints: latency-svc-xnl9l [610.815493ms]
Apr 12 00:07:25.854: INFO: Created: latency-svc-pmvhp
Apr 12 00:07:25.870: INFO: Got endpoints: latency-svc-pmvhp [603.743963ms]
Apr 12 00:07:25.887: INFO: Created: latency-svc-btlqg
Apr 12 00:07:25.899: INFO: Got endpoints: latency-svc-btlqg [580.592088ms]
Apr 12 00:07:25.917: INFO: Created: latency-svc-rwkr7
Apr 12 00:07:25.942: INFO: Got endpoints: latency-svc-rwkr7 [581.544196ms]
Apr 12 00:07:25.954: INFO: Created: latency-svc-pr94f
Apr 12 00:07:25.966: INFO: Got endpoints: latency-svc-pr94f [569.361271ms]
Apr 12 00:07:25.983: INFO: Created: latency-svc-tsxfp
Apr 12 00:07:25.996: INFO: Got endpoints: latency-svc-tsxfp [543.437924ms]
Apr 12 00:07:26.016: INFO: Created: latency-svc-qdcr6
Apr 12 00:07:26.032: INFO: Got endpoints: latency-svc-qdcr6 [563.420105ms]
Apr 12 00:07:26.068: INFO: Created: latency-svc-qm8rt
Apr 12 00:07:26.088: INFO: Created: latency-svc-b8l58
Apr 12 00:07:26.088: INFO: Got endpoints: latency-svc-qm8rt [577.228544ms]
Apr 12 00:07:26.101: INFO: Got endpoints: latency-svc-b8l58 [536.117337ms]
Apr 12 00:07:26.118: INFO: Created: latency-svc-dv6m8
Apr 12 00:07:26.130: INFO: Got endpoints: latency-svc-dv6m8 [509.63734ms]
Apr 12 00:07:26.145: INFO: Created: latency-svc-hlkxq
Apr 12 00:07:26.160: INFO: Got endpoints: latency-svc-hlkxq [503.461102ms]
Apr 12 00:07:26.218: INFO: Created: latency-svc-pdssx
Apr 12 00:07:26.254: INFO: Got endpoints: latency-svc-pdssx [549.598069ms]
Apr 12 00:07:26.254: INFO: Created: latency-svc-m7rkt
Apr 12 00:07:26.280: INFO: Got endpoints: latency-svc-m7rkt [539.585106ms]
Apr 12 00:07:26.304: INFO: Created: latency-svc-mt7bn
Apr 12 00:07:26.355: INFO: Got endpoints: latency-svc-mt7bn [584.37751ms]
Apr 12 00:07:26.357: INFO: Created: latency-svc-vwqmm
Apr 12 00:07:26.370: INFO: Got endpoints: latency-svc-vwqmm [547.915872ms]
Apr 12 00:07:26.385: INFO: Created: latency-svc-ljd9f
Apr 12 00:07:26.397: INFO: Got endpoints: latency-svc-ljd9f [560.945515ms]
Apr 12 00:07:26.434: INFO: Created: latency-svc-mk786
Apr 12 00:07:26.474: INFO: Got endpoints: latency-svc-mk786 [604.584603ms]
Apr 12 00:07:26.487: INFO: Created: latency-svc-n4wvv
Apr 12 00:07:26.499: INFO: Got endpoints: latency-svc-n4wvv [599.146343ms]
Apr 12 00:07:26.519: INFO: Created: latency-svc-fkqcm
Apr 12 00:07:26.535: INFO: Got endpoints: latency-svc-fkqcm [593.15216ms]
Apr 12 00:07:26.562: INFO: Created: latency-svc-dcmvm
Apr 12 00:07:26.618: INFO: Got endpoints: latency-svc-dcmvm [652.472752ms]
Apr 12 00:07:26.619: INFO: Latencies: [40.527806ms 76.223289ms 143.814068ms 167.938016ms 205.271276ms 278.509992ms 295.040982ms 337.472729ms 391.196271ms 427.105643ms 465.334469ms 503.461102ms 509.63734ms 536.117337ms 539.585106ms 543.437924ms 547.915872ms 549.598069ms 555.030974ms 560.379595ms 560.945515ms 563.420105ms 569.361271ms 569.408675ms 571.452477ms 573.342291ms 577.228544ms 580.592088ms 580.942376ms 581.544196ms 583.85727ms 584.35825ms 584.37751ms 584.877833ms 585.002137ms 587.25148ms 588.42346ms 592.748213ms 593.15216ms 593.195996ms 593.586668ms 595.915022ms 596.832582ms 599.146343ms 601.506346ms 603.743963ms 604.546251ms 604.584603ms 605.310388ms 605.92312ms 608.680414ms 608.73635ms 610.404209ms 610.815493ms 611.342243ms 611.359093ms 611.514402ms 611.815853ms 612.597969ms 613.568076ms 614.973951ms 615.082431ms 617.087357ms 617.414908ms 617.619373ms 617.732095ms 617.911535ms 618.690652ms 619.324958ms 620.790189ms 622.771387ms 622.968016ms 623.142978ms 623.430467ms 625.047336ms 626.170943ms 628.4326ms 628.657085ms 628.666361ms 628.746577ms 628.854011ms 629.157552ms 629.728242ms 630.603105ms 630.704707ms 632.359152ms 632.403411ms 634.830343ms 635.190641ms 635.257084ms 635.301913ms 635.421741ms 640.385128ms 640.621112ms 640.812519ms 640.983058ms 641.009056ms 641.843816ms 641.884736ms 642.844644ms 647.126732ms 648.605523ms 649.382543ms 652.472752ms 652.60017ms 653.268089ms 655.634492ms 656.970851ms 657.054604ms 657.172719ms 658.232478ms 658.551557ms 661.430846ms 661.553547ms 664.812761ms 665.08997ms 666.898569ms 670.776201ms 670.887094ms 670.921462ms 670.970314ms 671.237369ms 671.251444ms 671.340506ms 674.177043ms 675.497622ms 675.659934ms 676.502699ms 676.900667ms 677.60908ms 679.633342ms 682.94332ms 683.010997ms 684.636934ms 684.870733ms 685.369497ms 686.210387ms 687.380796ms 688.465225ms 688.875795ms 689.296594ms 689.681962ms 690.380206ms 690.829145ms 691.365865ms 693.934349ms 696.491708ms 698.099532ms 698.195399ms 698.527882ms 699.769615ms 701.625793ms 703.867521ms 704.022699ms 704.520542ms 706.450184ms 708.391495ms 712.784566ms 713.579922ms 714.590724ms 716.190905ms 716.394992ms 716.553796ms 718.778385ms 719.255152ms 723.033218ms 723.047993ms 726.311053ms 733.775064ms 737.222582ms 742.874916ms 744.261065ms 745.456946ms 746.598452ms 755.830433ms 762.089992ms 766.570851ms 772.794395ms 773.107494ms 778.220105ms 780.774603ms 780.853134ms 792.63962ms 796.591234ms 809.103358ms 850.610819ms 854.287616ms 855.804478ms 862.897823ms 864.975245ms 868.672627ms 874.498094ms 874.772585ms 874.87281ms 899.839718ms 904.618675ms 909.623791ms 910.553414ms 929.321179ms 936.064282ms]
Apr 12 00:07:26.619: INFO: 50 %ile: 647.126732ms
Apr 12 00:07:26.619: INFO: 90 %ile: 780.774603ms
Apr 12 00:07:26.619: INFO: 99 %ile: 929.321179ms
Apr 12 00:07:26.619: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:07:26.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5547" for this suite.
• [SLOW TEST:13.290 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":97,"skipped":1497,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:07:26.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:07:26.668: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:07:27.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4947" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":98,"skipped":1616,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:07:27.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 12 00:07:36.162: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 12 00:07:36.180: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 12 00:07:38.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 12 00:07:38.183: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 12 00:07:40.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 12 00:07:40.206: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:07:40.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8049" for this suite.
• [SLOW TEST:12.347 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1628,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:07:40.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:07:53.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-16" for this suite.
• [SLOW TEST:13.301 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":100,"skipped":1633,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:07:53.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 12 00:07:53.905: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 12 00:07:55.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246873, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246873, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246873, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246873, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 12 00:07:58.942: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:07:58.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7212" for this suite.
STEP: Destroying namespace "webhook-7212-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.554 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":101,"skipped":1638,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:07:59.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 12 00:07:59.143: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:08:05.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3891" for this suite.
• [SLOW TEST:6.460 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":102,"skipped":1640,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:08:05.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4112, will wait for the garbage collector to delete the pods
Apr 12 00:08:09.679: INFO: Deleting Job.batch foo took: 6.547838ms
Apr 12 00:08:09.779: INFO: Terminating Job.batch foo pods took: 100.26887ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:08:52.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4112" for this suite.
• [SLOW TEST:47.266 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":103,"skipped":1652,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:08:52.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 12 00:09:00.995: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 12 00:09:01.015: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 12 00:09:03.016: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 12 00:09:03.019: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 12 00:09:05.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 12 00:09:05.020: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 12 00:09:07.016: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 12 00:09:07.020: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 12 00:09:09.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 12 00:09:09.019: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 12 00:09:11.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 12 00:09:11.020: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 12 00:09:13.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 12 00:09:13.018: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:09:13.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2306" for this suite.
• [SLOW TEST:20.240 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1704,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:09:13.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-12a86179-3a3f-41fc-bf82-82f88a8bdcac
STEP: Creating a pod to test consume configMaps
Apr 12 00:09:13.155: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991" in namespace "projected-5027" to be "Succeeded or Failed"
Apr 12 00:09:13.159: INFO: Pod "pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991": Phase="Pending", Reason="", readiness=false. Elapsed: 3.172085ms
Apr 12 00:09:15.163: INFO: Pod "pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007223445s
Apr 12 00:09:17.167: INFO: Pod "pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011437289s
STEP: Saw pod success
Apr 12 00:09:17.167: INFO: Pod "pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991" satisfied condition "Succeeded or Failed"
Apr 12 00:09:17.170: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 12 00:09:17.220: INFO: Waiting for pod pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991 to disappear
Apr 12 00:09:17.234: INFO: Pod pod-projected-configmaps-cb7d8a57-312f-4a56-acc7-8b0eb56cd991 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:09:17.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5027" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1722,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:09:17.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 12 00:09:17.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6409'
Apr 12 00:09:17.623: INFO: stderr: ""
Apr 12 00:09:17.623: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 12 00:09:17.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6409'
Apr 12 00:09:17.758: INFO: stderr: ""
Apr 12 00:09:17.758: INFO: stdout: "update-demo-nautilus-mpkdr update-demo-nautilus-z9m44 "
Apr 12 00:09:17.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpkdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:17.846: INFO: stderr: ""
Apr 12 00:09:17.846: INFO: stdout: ""
Apr 12 00:09:17.846: INFO: update-demo-nautilus-mpkdr is created but not running
Apr 12 00:09:22.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6409'
Apr 12 00:09:22.945: INFO: stderr: ""
Apr 12 00:09:22.945: INFO: stdout: "update-demo-nautilus-mpkdr update-demo-nautilus-z9m44 "
Apr 12 00:09:22.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpkdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:23.040: INFO: stderr: ""
Apr 12 00:09:23.040: INFO: stdout: "true"
Apr 12 00:09:23.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpkdr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:23.134: INFO: stderr: ""
Apr 12 00:09:23.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 12 00:09:23.134: INFO: validating pod update-demo-nautilus-mpkdr
Apr 12 00:09:23.138: INFO: got data: { "image": "nautilus.jpg" }
Apr 12 00:09:23.138: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 12 00:09:23.138: INFO: update-demo-nautilus-mpkdr is verified up and running
Apr 12 00:09:23.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9m44 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:23.233: INFO: stderr: ""
Apr 12 00:09:23.233: INFO: stdout: "true"
Apr 12 00:09:23.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9m44 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:23.336: INFO: stderr: ""
Apr 12 00:09:23.336: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 12 00:09:23.336: INFO: validating pod update-demo-nautilus-z9m44
Apr 12 00:09:23.340: INFO: got data: { "image": "nautilus.jpg" }
Apr 12 00:09:23.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 12 00:09:23.340: INFO: update-demo-nautilus-z9m44 is verified up and running
STEP: scaling down the replication controller
Apr 12 00:09:23.343: INFO: scanned /root for discovery docs:
Apr 12 00:09:23.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6409'
Apr 12 00:09:24.456: INFO: stderr: ""
Apr 12 00:09:24.456: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 12 00:09:24.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6409'
Apr 12 00:09:24.554: INFO: stderr: ""
Apr 12 00:09:24.554: INFO: stdout: "update-demo-nautilus-mpkdr update-demo-nautilus-z9m44 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 12 00:09:29.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6409'
Apr 12 00:09:29.646: INFO: stderr: ""
Apr 12 00:09:29.646: INFO: stdout: "update-demo-nautilus-z9m44 "
Apr 12 00:09:29.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9m44 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:29.735: INFO: stderr: ""
Apr 12 00:09:29.735: INFO: stdout: "true"
Apr 12 00:09:29.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9m44 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:29.842: INFO: stderr: ""
Apr 12 00:09:29.842: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 12 00:09:29.842: INFO: validating pod update-demo-nautilus-z9m44
Apr 12 00:09:29.846: INFO: got data: { "image": "nautilus.jpg" }
Apr 12 00:09:29.846: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 12 00:09:29.846: INFO: update-demo-nautilus-z9m44 is verified up and running
STEP: scaling up the replication controller
Apr 12 00:09:29.849: INFO: scanned /root for discovery docs:
Apr 12 00:09:29.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6409'
Apr 12 00:09:30.974: INFO: stderr: ""
Apr 12 00:09:30.974: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 12 00:09:30.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6409'
Apr 12 00:09:31.076: INFO: stderr: ""
Apr 12 00:09:31.076: INFO: stdout: "update-demo-nautilus-24bf5 update-demo-nautilus-z9m44 "
Apr 12 00:09:31.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24bf5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:31.166: INFO: stderr: ""
Apr 12 00:09:31.166: INFO: stdout: ""
Apr 12 00:09:31.166: INFO: update-demo-nautilus-24bf5 is created but not running
Apr 12 00:09:36.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6409'
Apr 12 00:09:36.264: INFO: stderr: ""
Apr 12 00:09:36.264: INFO: stdout: "update-demo-nautilus-24bf5 update-demo-nautilus-z9m44 "
Apr 12 00:09:36.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24bf5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:36.367: INFO: stderr: ""
Apr 12 00:09:36.367: INFO: stdout: "true"
Apr 12 00:09:36.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24bf5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:36.476: INFO: stderr: ""
Apr 12 00:09:36.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 12 00:09:36.476: INFO: validating pod update-demo-nautilus-24bf5
Apr 12 00:09:36.480: INFO: got data: { "image": "nautilus.jpg" }
Apr 12 00:09:36.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 12 00:09:36.480: INFO: update-demo-nautilus-24bf5 is verified up and running
Apr 12 00:09:36.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9m44 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:36.584: INFO: stderr: ""
Apr 12 00:09:36.584: INFO: stdout: "true"
Apr 12 00:09:36.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9m44 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6409'
Apr 12 00:09:36.683: INFO: stderr: ""
Apr 12 00:09:36.683: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 12 00:09:36.683: INFO: validating pod update-demo-nautilus-z9m44
Apr 12 00:09:36.686: INFO: got data: { "image": "nautilus.jpg" }
Apr 12 00:09:36.686: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 12 00:09:36.686: INFO: update-demo-nautilus-z9m44 is verified up and running
STEP: using delete to clean up resources
Apr 12 00:09:36.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6409'
Apr 12 00:09:36.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 12 00:09:36.805: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 12 00:09:36.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6409'
Apr 12 00:09:36.903: INFO: stderr: "No resources found in kubectl-6409 namespace.\n"
Apr 12 00:09:36.903: INFO: stdout: ""
Apr 12 00:09:36.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6409 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 12 00:09:37.027: INFO: stderr: ""
Apr 12 00:09:37.028: INFO: stdout: "update-demo-nautilus-24bf5\nupdate-demo-nautilus-z9m44\n"
Apr 12 00:09:37.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6409'
Apr 12 00:09:37.623: INFO: stderr: "No resources found in kubectl-6409 namespace.\n"
Apr 12 00:09:37.623: INFO: stdout: ""
Apr 12 00:09:37.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6409 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 12 00:09:37.723: INFO: stderr: ""
Apr 12 00:09:37.723: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:09:37.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6409" for this suite.
• [SLOW TEST:20.488 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":106,"skipped":1727,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:09:37.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-15610160-2934-4ee7-bcb6-552233a2f288
STEP: Creating a pod to test consume configMaps
Apr 12 00:09:37.953: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec" in namespace "projected-9524" to be "Succeeded or Failed"
Apr 12 00:09:37.956: INFO: Pod "pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.148776ms
Apr 12 00:09:39.960: INFO: Pod "pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007484957s
Apr 12 00:09:41.965: INFO: Pod "pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011603768s
STEP: Saw pod success
Apr 12 00:09:41.965: INFO: Pod "pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec" satisfied condition "Succeeded or Failed"
Apr 12 00:09:41.967: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec container projected-configmap-volume-test:
STEP: delete the pod
Apr 12 00:09:41.984: INFO: Waiting for pod pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec to disappear
Apr 12 00:09:41.998: INFO: Pod pod-projected-configmaps-d67bc788-2919-4712-a701-b23eccdf03ec no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:09:41.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9524" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1737,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:09:42.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 12 00:09:42.432: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 12 00:09:44.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246982, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246982, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246982, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722246982, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 12 00:09:47.478: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:09:47.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4094" for this suite.
STEP: Destroying namespace "webhook-4094-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.634 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":108,"skipped":1803,"failed":0}
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:09:47.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:09:47.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version'
Apr 12 00:09:48.007: INFO: stderr: ""
Apr 12 00:09:48.007: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:09:48.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6873" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":109,"skipped":1803,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:09:48.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0412 00:10:28.769208 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 12 00:10:28.769: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:10:28.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-934" for this suite.
• [SLOW TEST:40.760 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":110,"skipped":1819,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:10:28.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:10:28.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9026" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":111,"skipped":1831,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:10:28.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:10:36.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4834" for this suite.
• [SLOW TEST:7.228 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":112,"skipped":1842,"failed":0}
SSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:10:36.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1368.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.115.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.115.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.115.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.115.140_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1368.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.115.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.115.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.115.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.115.140_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 12 00:10:42.640: INFO: Unable to read wheezy_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.643: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.646: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.649: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.671: INFO: Unable to read jessie_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.674: INFO: Unable to read jessie_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.677: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.680: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:42.699: INFO: Lookups using dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1 failed for: [wheezy_udp@dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_udp@dns-test-service.dns-1368.svc.cluster.local jessie_tcp@dns-test-service.dns-1368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local]
Apr 12 00:10:47.705: INFO: Unable to read wheezy_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.742: INFO: Unable to read jessie_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.745: INFO: Unable to read jessie_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.747: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.749: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1)
Apr 12 00:10:47.763: INFO: Lookups using dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1 failed for: [wheezy_udp@dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_udp@dns-test-service.dns-1368.svc.cluster.local jessie_tcp@dns-test-service.dns-1368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local]
Apr 12 00:10:52.705: INFO: Unable to read wheezy_udp@dns-test-service.dns-1368.svc.cluster.local from pod
dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.708: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.716: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.735: INFO: Unable to read jessie_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.738: INFO: Unable to read jessie_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.742: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not 
find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:52.758: INFO: Lookups using dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1 failed for: [wheezy_udp@dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_udp@dns-test-service.dns-1368.svc.cluster.local jessie_tcp@dns-test-service.dns-1368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local] Apr 12 00:10:57.704: INFO: Unable to read wheezy_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.708: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.710: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.713: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.732: INFO: Unable to read jessie_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods 
dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.735: INFO: Unable to read jessie_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.737: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.740: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:10:57.759: INFO: Lookups using dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1 failed for: [wheezy_udp@dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_udp@dns-test-service.dns-1368.svc.cluster.local jessie_tcp@dns-test-service.dns-1368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local] Apr 12 00:11:02.705: INFO: Unable to read wheezy_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.708: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods 
dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.735: INFO: Unable to read jessie_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.738: INFO: Unable to read jessie_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.741: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.743: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:02.763: INFO: Lookups using dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1 failed for: [wheezy_udp@dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_udp@dns-test-service.dns-1368.svc.cluster.local jessie_tcp@dns-test-service.dns-1368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local] Apr 12 00:11:07.704: INFO: Unable to read wheezy_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.707: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.710: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.712: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.731: INFO: Unable to read jessie_udp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.734: INFO: Unable to read jessie_tcp@dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.736: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.738: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local from pod dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1: the server could not find the requested resource (get pods dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1) Apr 12 00:11:07.753: INFO: Lookups using dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1 failed for: [wheezy_udp@dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@dns-test-service.dns-1368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_udp@dns-test-service.dns-1368.svc.cluster.local jessie_tcp@dns-test-service.dns-1368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1368.svc.cluster.local] Apr 12 00:11:12.794: INFO: DNS probes using dns-1368/dns-test-39ce94ed-078f-49a5-b75b-f1db0614cfa1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:11:13.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1368" for this suite. 
• [SLOW TEST:37.255 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":113,"skipped":1847,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:11:13.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 12 00:11:13.547: INFO: >>> kubeConfig: /root/.kube/config Apr 12 00:11:16.465: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:11:25.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3602" for this suite. 
• [SLOW TEST:12.719 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":114,"skipped":1854,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:11:26.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:11:26.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6" in namespace "projected-4050" to be "Succeeded or Failed" Apr 12 00:11:26.118: INFO: Pod "downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.108908ms Apr 12 00:11:28.124: INFO: Pod "downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010048578s Apr 12 00:11:30.131: INFO: Pod "downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016528542s STEP: Saw pod success Apr 12 00:11:30.131: INFO: Pod "downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6" satisfied condition "Succeeded or Failed" Apr 12 00:11:30.136: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6 container client-container: STEP: delete the pod Apr 12 00:11:30.176: INFO: Waiting for pod downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6 to disappear Apr 12 00:11:30.190: INFO: Pod downwardapi-volume-df2673a3-f185-43a2-a84f-3948a224fca6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:11:30.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4050" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1863,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:11:30.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-14835d16-7e2b-4bef-b59f-2fb6d32b65ee STEP: Creating a pod to test consume secrets Apr 12 00:11:30.275: INFO: Waiting up to 5m0s for pod "pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5" in namespace "secrets-1215" to be "Succeeded or Failed" Apr 12 00:11:30.279: INFO: Pod "pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.99911ms Apr 12 00:11:32.299: INFO: Pod "pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023136466s Apr 12 00:11:34.302: INFO: Pod "pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026186183s STEP: Saw pod success Apr 12 00:11:34.302: INFO: Pod "pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5" satisfied condition "Succeeded or Failed" Apr 12 00:11:34.304: INFO: Trying to get logs from node latest-worker pod pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5 container secret-volume-test: STEP: delete the pod Apr 12 00:11:34.333: INFO: Waiting for pod pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5 to disappear Apr 12 00:11:34.348: INFO: Pod pod-secrets-9bbda6b5-8237-43bf-aa45-b95e00f536b5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:11:34.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1215" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1867,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:11:34.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the 
webhook pod STEP: Wait for the deployment to be ready Apr 12 00:11:35.183: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:11:37.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247095, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247095, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247095, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247095, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:11:40.263: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:11:40.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2136" for this suite. STEP: Destroying namespace "webhook-2136-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.135 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":117,"skipped":1882,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:11:40.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let 
webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 12 00:11:41.266: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 12 00:11:43.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247101, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247101, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247101, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247101, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 12 00:11:46.308: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 12 00:11:46.327: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:11:46.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4043" for this suite.
STEP: Destroying namespace "webhook-4043-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.935 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":118,"skipped":1904,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:11:46.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 12 00:11:51.066: INFO: Successfully updated pod "pod-update-activedeadlineseconds-689695c5-32e9-4b00-8c3b-febecaf0f27e"
Apr 12 00:11:51.066: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-689695c5-32e9-4b00-8c3b-febecaf0f27e" in namespace "pods-3434" to be "terminated due to deadline exceeded"
Apr 12 00:11:51.087: INFO: Pod "pod-update-activedeadlineseconds-689695c5-32e9-4b00-8c3b-febecaf0f27e": Phase="Running", Reason="", readiness=true. Elapsed: 20.375338ms
Apr 12 00:11:53.091: INFO: Pod "pod-update-activedeadlineseconds-689695c5-32e9-4b00-8c3b-febecaf0f27e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024399557s
Apr 12 00:11:53.091: INFO: Pod "pod-update-activedeadlineseconds-689695c5-32e9-4b00-8c3b-febecaf0f27e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:11:53.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3434" for this suite.
• [SLOW TEST:6.673 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1911,"failed":0}
S
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:11:53.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be
provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-3524
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3524 to expose endpoints map[]
Apr 12 00:11:53.173: INFO: Get endpoints failed (10.91808ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 12 00:11:54.177: INFO: successfully validated that service endpoint-test2 in namespace services-3524 exposes endpoints map[] (1.01467523s elapsed)
STEP: Creating pod pod1 in namespace services-3524
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3524 to expose endpoints map[pod1:[80]]
Apr 12 00:11:57.251: INFO: successfully validated that service endpoint-test2 in namespace services-3524 exposes endpoints map[pod1:[80]] (3.067434828s elapsed)
STEP: Creating pod pod2 in namespace services-3524
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3524 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 12 00:12:00.430: INFO: successfully validated that service endpoint-test2 in namespace services-3524 exposes endpoints map[pod1:[80] pod2:[80]] (3.15546469s elapsed)
STEP: Deleting pod pod1 in namespace services-3524
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3524 to expose endpoints map[pod2:[80]]
Apr 12 00:12:01.456: INFO: successfully validated that service endpoint-test2 in namespace services-3524 exposes endpoints map[pod2:[80]] (1.02140919s elapsed)
STEP: Deleting pod pod2 in namespace services-3524
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3524 to expose endpoints map[]
Apr 12 00:12:02.614: INFO: successfully validated that service endpoint-test2 in namespace services-3524 exposes endpoints map[] (1.15294562s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:02.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3524" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:9.682 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":120,"skipped":1912,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:12:02.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:02.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1705" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":121,"skipped":1914,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:12:02.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 12 00:12:03.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8648'
Apr 12 00:12:03.321: INFO: stderr: ""
Apr 12 00:12:03.321: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 12 00:12:03.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8648'
Apr 12 00:12:03.418: INFO: stderr: ""
Apr 12 00:12:03.418: INFO: stdout: "update-demo-nautilus-5jm29 update-demo-nautilus-fw64f "
Apr 12 00:12:03.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jm29 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8648'
Apr 12 00:12:03.502: INFO: stderr: ""
Apr 12 00:12:03.502: INFO: stdout: ""
Apr 12 00:12:03.502: INFO: update-demo-nautilus-5jm29 is created but not running
Apr 12 00:12:08.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8648'
Apr 12 00:12:08.601: INFO: stderr: ""
Apr 12 00:12:08.601: INFO: stdout: "update-demo-nautilus-5jm29 update-demo-nautilus-fw64f "
Apr 12 00:12:08.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jm29 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8648'
Apr 12 00:12:08.690: INFO: stderr: ""
Apr 12 00:12:08.690: INFO: stdout: "true"
Apr 12 00:12:08.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jm29 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8648'
Apr 12 00:12:08.788: INFO: stderr: ""
Apr 12 00:12:08.788: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 12 00:12:08.788: INFO: validating pod update-demo-nautilus-5jm29
Apr 12 00:12:08.792: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 12 00:12:08.792: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 12 00:12:08.792: INFO: update-demo-nautilus-5jm29 is verified up and running
Apr 12 00:12:08.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw64f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8648'
Apr 12 00:12:08.891: INFO: stderr: ""
Apr 12 00:12:08.891: INFO: stdout: "true"
Apr 12 00:12:08.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw64f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8648'
Apr 12 00:12:08.972: INFO: stderr: ""
Apr 12 00:12:08.972: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 12 00:12:08.972: INFO: validating pod update-demo-nautilus-fw64f
Apr 12 00:12:08.976: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 12 00:12:08.976: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 12 00:12:08.976: INFO: update-demo-nautilus-fw64f is verified up and running
STEP: using delete to clean up resources
Apr 12 00:12:08.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8648'
Apr 12 00:12:09.077: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 12 00:12:09.077: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 12 00:12:09.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8648'
Apr 12 00:12:09.174: INFO: stderr: "No resources found in kubectl-8648 namespace.\n"
Apr 12 00:12:09.175: INFO: stdout: ""
Apr 12 00:12:09.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8648 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 12 00:12:09.271: INFO: stderr: ""
Apr 12 00:12:09.271: INFO: stdout: "update-demo-nautilus-5jm29\nupdate-demo-nautilus-fw64f\n"
Apr 12 00:12:09.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8648'
Apr 12 00:12:09.992: INFO: stderr: "No resources found in kubectl-8648 namespace.\n"
Apr 12 00:12:09.992: INFO: stdout: ""
Apr 12 00:12:09.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8648 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 12 00:12:10.090: INFO: stderr: ""
Apr 12 00:12:10.090: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:10.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8648" for this suite.
• [SLOW TEST:7.144 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":122,"skipped":1959,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:12:10.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Apr 12 00:12:10.284: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Apr 12 00:12:10.293: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 12 00:12:10.293: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 12 00:12:10.305: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 12 00:12:10.305: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 12 00:12:10.513: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Apr 12 00:12:10.513: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Apr 12 00:12:18.257: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:18.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-1629" for this suite.
• [SLOW TEST:8.207 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":123,"skipped":1987,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:12:18.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 12 00:12:18.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d" in namespace "downward-api-2458" to be "Succeeded or Failed"
Apr 12 00:12:18.427: INFO: Pod "downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.319234ms
Apr 12 00:12:20.431: INFO: Pod "downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015110684s
Apr 12 00:12:22.435: INFO: Pod "downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01922599s
STEP: Saw pod success
Apr 12 00:12:22.435: INFO: Pod "downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d" satisfied condition "Succeeded or Failed"
Apr 12 00:12:22.438: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d container client-container:
STEP: delete the pod
Apr 12 00:12:22.547: INFO: Waiting for pod downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d to disappear
Apr 12 00:12:22.601: INFO: Pod downwardapi-volume-87f4e552-ae2d-4a06-a64a-9cad10194a3d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:22.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2458" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":1993,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:12:22.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:22.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-510" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":125,"skipped":2023,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:12:22.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 12 00:12:22.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9" in namespace "projected-3337" to be "Succeeded or Failed"
Apr 12 00:12:22.846: INFO: Pod "downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.472256ms
Apr 12 00:12:24.850: INFO: Pod "downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007205748s
Apr 12 00:12:26.855: INFO: Pod "downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012018399s
STEP: Saw pod success
Apr 12 00:12:26.855: INFO: Pod "downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9" satisfied condition "Succeeded or Failed"
Apr 12 00:12:26.858: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9 container client-container:
STEP: delete the pod
Apr 12 00:12:26.890: INFO: Waiting for pod downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9 to disappear
Apr 12 00:12:26.922: INFO: Pod downwardapi-volume-4f282d20-8cd4-4499-89a4-a8d89ebdade9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:26.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3337" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2028,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:12:26.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 12 00:12:33.608: INFO: 0 pods remaining Apr 12 00:12:33.608: INFO: 0 pods has nil DeletionTimestamp Apr 12 00:12:33.608: INFO: Apr 12 00:12:34.844: INFO: 0 pods remaining Apr 12 00:12:34.845: INFO: 0 pods has nil DeletionTimestamp Apr 12 00:12:34.845: INFO: STEP: Gathering metrics W0412 00:12:35.417278 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 12 00:12:35.417: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:12:35.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9439" for this suite. 
• [SLOW TEST:8.494 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":127,"skipped":2030,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:12:35.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-2ba1ef6f-9922-424c-a795-481edf435f0a
STEP: Creating a pod to test consume configMaps
Apr 12 00:12:36.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d" in namespace "projected-8131" to be "Succeeded or Failed"
Apr 12 00:12:36.259: INFO: Pod "pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 184.254503ms
Apr 12 00:12:38.262: INFO: Pod "pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18814286s
Apr 12 00:12:40.266: INFO: Pod "pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.192113112s
STEP: Saw pod success
Apr 12 00:12:40.266: INFO: Pod "pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d" satisfied condition "Succeeded or Failed"
Apr 12 00:12:40.269: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d container projected-configmap-volume-test:
STEP: delete the pod
Apr 12 00:12:40.315: INFO: Waiting for pod pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d to disappear
Apr 12 00:12:40.341: INFO: Pod pod-projected-configmaps-e4f6a29f-2bdc-4361-a8d8-531e86825a1d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:12:40.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8131" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2059,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:12:40.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4907.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4907.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4907.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4907.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4907.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4907.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 12 00:12:46.473: INFO: DNS probes using dns-4907/dns-test-884d5fca-e810-4c54-9f0f-c5d9cbb2f25c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:12:46.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4907" for this suite. 
• [SLOW TEST:6.199 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":129,"skipped":2063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:12:46.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:12:46.607: INFO: Creating deployment "test-recreate-deployment" Apr 12 00:12:46.620: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 12 00:12:46.701: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 12 00:12:48.708: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 12 00:12:48.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247166, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247166, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247166, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247166, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:12:50.713: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 12 00:12:50.720: INFO: Updating deployment test-recreate-deployment Apr 12 00:12:50.720: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 12 00:12:51.158: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4127 /apis/apps/v1/namespaces/deployment-4127/deployments/test-recreate-deployment 1498b616-4b5d-4c3c-a140-1b5dced7b46a 7340070 2 2020-04-12 00:12:46 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b43c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-12 00:12:50 +0000 UTC,LastTransitionTime:2020-04-12 00:12:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-12 00:12:50 +0000 UTC,LastTransitionTime:2020-04-12 00:12:46 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 12 00:12:51.162: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4127 /apis/apps/v1/namespaces/deployment-4127/replicasets/test-recreate-deployment-5f94c574ff 789cca69-245f-4adf-9d77-1e02f013eff1 7340066 1 2020-04-12 00:12:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1498b616-4b5d-4c3c-a140-1b5dced7b46a 0xc0031d02a7 0xc0031d02a8}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031d0308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:12:51.162: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 12 00:12:51.162: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-4127 /apis/apps/v1/namespaces/deployment-4127/replicasets/test-recreate-deployment-846c7dd955 77d0a65a-d339-4ef6-a3af-278461309808 7340058 2 2020-04-12 00:12:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1498b616-4b5d-4c3c-a140-1b5dced7b46a 0xc0031d0377 0xc0031d0378}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031d03e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:12:51.166: INFO: Pod "test-recreate-deployment-5f94c574ff-9n9mc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-9n9mc test-recreate-deployment-5f94c574ff- deployment-4127 /api/v1/namespaces/deployment-4127/pods/test-recreate-deployment-5f94c574ff-9n9mc 315de4ca-40db-4c9d-944f-fa97201694c9 7340068 0 2020-04-12 00:12:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 789cca69-245f-4adf-9d77-1e02f013eff1 0xc0031d08a7 0xc0031d08a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9kjcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9kjcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9kjcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:12:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:12:51.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4127" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":130,"skipped":2131,"failed":0} SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:12:51.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:12:55.419: INFO: Waiting up to 5m0s for pod "client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360" in namespace "pods-5357" to be "Succeeded or Failed" Apr 12 00:12:55.433: INFO: Pod "client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360": Phase="Pending", Reason="", readiness=false. Elapsed: 13.887278ms Apr 12 00:12:57.437: INFO: Pod "client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01792565s Apr 12 00:12:59.441: INFO: Pod "client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02154836s STEP: Saw pod success Apr 12 00:12:59.441: INFO: Pod "client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360" satisfied condition "Succeeded or Failed" Apr 12 00:12:59.443: INFO: Trying to get logs from node latest-worker2 pod client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360 container env3cont: STEP: delete the pod Apr 12 00:12:59.480: INFO: Waiting for pod client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360 to disappear Apr 12 00:12:59.489: INFO: Pod client-envvars-56c26809-8848-44cc-b4f8-c489e1dcc360 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:12:59.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5357" for this suite. • [SLOW TEST:8.296 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2134,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:12:59.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 12 00:12:59.974: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:13:01.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247179, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247179, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247180, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247179, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:13:03.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247179, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247179, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247180, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247179, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:13:07.015: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:13:07.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4342-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:13:08.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3231" for this suite. STEP: Destroying namespace "webhook-3231-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.774 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":132,"skipped":2137,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:13:08.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-252 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 12 
00:13:08.421: INFO: Found 0 stateful pods, waiting for 3 Apr 12 00:13:18.426: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:13:18.426: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:13:18.426: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:13:18.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-252 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:13:18.715: INFO: stderr: "I0412 00:13:18.587011 1703 log.go:172] (0xc00003a420) (0xc0005f0aa0) Create stream\nI0412 00:13:18.587087 1703 log.go:172] (0xc00003a420) (0xc0005f0aa0) Stream added, broadcasting: 1\nI0412 00:13:18.596546 1703 log.go:172] (0xc00003a420) Reply frame received for 1\nI0412 00:13:18.596589 1703 log.go:172] (0xc00003a420) (0xc00082b220) Create stream\nI0412 00:13:18.596598 1703 log.go:172] (0xc00003a420) (0xc00082b220) Stream added, broadcasting: 3\nI0412 00:13:18.598488 1703 log.go:172] (0xc00003a420) Reply frame received for 3\nI0412 00:13:18.598523 1703 log.go:172] (0xc00003a420) (0xc000980000) Create stream\nI0412 00:13:18.598541 1703 log.go:172] (0xc00003a420) (0xc000980000) Stream added, broadcasting: 5\nI0412 00:13:18.599128 1703 log.go:172] (0xc00003a420) Reply frame received for 5\nI0412 00:13:18.673421 1703 log.go:172] (0xc00003a420) Data frame received for 5\nI0412 00:13:18.673462 1703 log.go:172] (0xc000980000) (5) Data frame handling\nI0412 00:13:18.673487 1703 log.go:172] (0xc000980000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:13:18.707586 1703 log.go:172] (0xc00003a420) Data frame received for 3\nI0412 00:13:18.707633 1703 log.go:172] (0xc00082b220) (3) Data frame handling\nI0412 00:13:18.707659 1703 log.go:172] (0xc00082b220) (3) Data frame sent\nI0412 
00:13:18.707679 1703 log.go:172] (0xc00003a420) Data frame received for 3\nI0412 00:13:18.707696 1703 log.go:172] (0xc00082b220) (3) Data frame handling\nI0412 00:13:18.708090 1703 log.go:172] (0xc00003a420) Data frame received for 5\nI0412 00:13:18.708115 1703 log.go:172] (0xc000980000) (5) Data frame handling\nI0412 00:13:18.710228 1703 log.go:172] (0xc00003a420) Data frame received for 1\nI0412 00:13:18.710260 1703 log.go:172] (0xc0005f0aa0) (1) Data frame handling\nI0412 00:13:18.710289 1703 log.go:172] (0xc0005f0aa0) (1) Data frame sent\nI0412 00:13:18.710309 1703 log.go:172] (0xc00003a420) (0xc0005f0aa0) Stream removed, broadcasting: 1\nI0412 00:13:18.710400 1703 log.go:172] (0xc00003a420) Go away received\nI0412 00:13:18.710768 1703 log.go:172] (0xc00003a420) (0xc0005f0aa0) Stream removed, broadcasting: 1\nI0412 00:13:18.710802 1703 log.go:172] (0xc00003a420) (0xc00082b220) Stream removed, broadcasting: 3\nI0412 00:13:18.710826 1703 log.go:172] (0xc00003a420) (0xc000980000) Stream removed, broadcasting: 5\n" Apr 12 00:13:18.715: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:13:18.715: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 12 00:13:28.747: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 12 00:13:38.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-252 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:13:39.028: INFO: stderr: "I0412 00:13:38.938065 1723 log.go:172] (0xc00090e000) (0xc0005995e0) Create stream\nI0412 00:13:38.938130 1723 log.go:172] (0xc00090e000) (0xc0005995e0) Stream added, 
broadcasting: 1\nI0412 00:13:38.940997 1723 log.go:172] (0xc00090e000) Reply frame received for 1\nI0412 00:13:38.941037 1723 log.go:172] (0xc00090e000) (0xc0004e8a00) Create stream\nI0412 00:13:38.941048 1723 log.go:172] (0xc00090e000) (0xc0004e8a00) Stream added, broadcasting: 3\nI0412 00:13:38.942282 1723 log.go:172] (0xc00090e000) Reply frame received for 3\nI0412 00:13:38.942303 1723 log.go:172] (0xc00090e000) (0xc0004e8aa0) Create stream\nI0412 00:13:38.942310 1723 log.go:172] (0xc00090e000) (0xc0004e8aa0) Stream added, broadcasting: 5\nI0412 00:13:38.943471 1723 log.go:172] (0xc00090e000) Reply frame received for 5\nI0412 00:13:39.020936 1723 log.go:172] (0xc00090e000) Data frame received for 5\nI0412 00:13:39.020957 1723 log.go:172] (0xc0004e8aa0) (5) Data frame handling\nI0412 00:13:39.020965 1723 log.go:172] (0xc0004e8aa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0412 00:13:39.021023 1723 log.go:172] (0xc00090e000) Data frame received for 3\nI0412 00:13:39.021062 1723 log.go:172] (0xc0004e8a00) (3) Data frame handling\nI0412 00:13:39.021077 1723 log.go:172] (0xc0004e8a00) (3) Data frame sent\nI0412 00:13:39.021097 1723 log.go:172] (0xc00090e000) Data frame received for 3\nI0412 00:13:39.021288 1723 log.go:172] (0xc00090e000) Data frame received for 5\nI0412 00:13:39.021325 1723 log.go:172] (0xc0004e8aa0) (5) Data frame handling\nI0412 00:13:39.021376 1723 log.go:172] (0xc0004e8a00) (3) Data frame handling\nI0412 00:13:39.022884 1723 log.go:172] (0xc00090e000) Data frame received for 1\nI0412 00:13:39.022902 1723 log.go:172] (0xc0005995e0) (1) Data frame handling\nI0412 00:13:39.022911 1723 log.go:172] (0xc0005995e0) (1) Data frame sent\nI0412 00:13:39.023047 1723 log.go:172] (0xc00090e000) (0xc0005995e0) Stream removed, broadcasting: 1\nI0412 00:13:39.023095 1723 log.go:172] (0xc00090e000) Go away received\nI0412 00:13:39.023453 1723 log.go:172] (0xc00090e000) (0xc0005995e0) Stream removed, broadcasting: 1\nI0412 
00:13:39.023476 1723 log.go:172] (0xc00090e000) (0xc0004e8a00) Stream removed, broadcasting: 3\nI0412 00:13:39.023491 1723 log.go:172] (0xc00090e000) (0xc0004e8aa0) Stream removed, broadcasting: 5\n" Apr 12 00:13:39.028: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:13:39.028: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:13:49.045: INFO: Waiting for StatefulSet statefulset-252/ss2 to complete update Apr 12 00:13:49.045: INFO: Waiting for Pod statefulset-252/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 12 00:13:49.045: INFO: Waiting for Pod statefulset-252/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 12 00:13:49.045: INFO: Waiting for Pod statefulset-252/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 12 00:13:59.052: INFO: Waiting for StatefulSet statefulset-252/ss2 to complete update Apr 12 00:13:59.052: INFO: Waiting for Pod statefulset-252/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 12 00:13:59.052: INFO: Waiting for Pod statefulset-252/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 12 00:14:09.052: INFO: Waiting for StatefulSet statefulset-252/ss2 to complete update Apr 12 00:14:09.052: INFO: Waiting for Pod statefulset-252/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 12 00:14:19.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-252 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:14:19.332: INFO: stderr: "I0412 00:14:19.191082 1745 log.go:172] (0xc00003ac60) (0xc000681540) Create stream\nI0412 00:14:19.191128 1745 log.go:172] (0xc00003ac60) (0xc000681540) Stream added, 
broadcasting: 1\nI0412 00:14:19.198186 1745 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0412 00:14:19.198243 1745 log.go:172] (0xc00003ac60) (0xc000ab4000) Create stream\nI0412 00:14:19.198266 1745 log.go:172] (0xc00003ac60) (0xc000ab4000) Stream added, broadcasting: 3\nI0412 00:14:19.199652 1745 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0412 00:14:19.199702 1745 log.go:172] (0xc00003ac60) (0xc000ab40a0) Create stream\nI0412 00:14:19.199726 1745 log.go:172] (0xc00003ac60) (0xc000ab40a0) Stream added, broadcasting: 5\nI0412 00:14:19.201582 1745 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0412 00:14:19.284893 1745 log.go:172] (0xc00003ac60) Data frame received for 5\nI0412 00:14:19.284923 1745 log.go:172] (0xc000ab40a0) (5) Data frame handling\nI0412 00:14:19.284944 1745 log.go:172] (0xc000ab40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:14:19.323624 1745 log.go:172] (0xc00003ac60) Data frame received for 3\nI0412 00:14:19.323671 1745 log.go:172] (0xc000ab4000) (3) Data frame handling\nI0412 00:14:19.323700 1745 log.go:172] (0xc000ab4000) (3) Data frame sent\nI0412 00:14:19.323787 1745 log.go:172] (0xc00003ac60) Data frame received for 5\nI0412 00:14:19.323832 1745 log.go:172] (0xc000ab40a0) (5) Data frame handling\nI0412 00:14:19.324055 1745 log.go:172] (0xc00003ac60) Data frame received for 3\nI0412 00:14:19.324093 1745 log.go:172] (0xc000ab4000) (3) Data frame handling\nI0412 00:14:19.326208 1745 log.go:172] (0xc00003ac60) Data frame received for 1\nI0412 00:14:19.326242 1745 log.go:172] (0xc000681540) (1) Data frame handling\nI0412 00:14:19.326274 1745 log.go:172] (0xc000681540) (1) Data frame sent\nI0412 00:14:19.326297 1745 log.go:172] (0xc00003ac60) (0xc000681540) Stream removed, broadcasting: 1\nI0412 00:14:19.326322 1745 log.go:172] (0xc00003ac60) Go away received\nI0412 00:14:19.326883 1745 log.go:172] (0xc00003ac60) (0xc000681540) Stream removed, broadcasting: 1\nI0412 
00:14:19.326907 1745 log.go:172] (0xc00003ac60) (0xc000ab4000) Stream removed, broadcasting: 3\nI0412 00:14:19.326920 1745 log.go:172] (0xc00003ac60) (0xc000ab40a0) Stream removed, broadcasting: 5\n" Apr 12 00:14:19.332: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:14:19.332: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:14:29.365: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 12 00:14:39.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-252 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:14:39.606: INFO: stderr: "I0412 00:14:39.525708 1766 log.go:172] (0xc0009e2d10) (0xc0009d45a0) Create stream\nI0412 00:14:39.525768 1766 log.go:172] (0xc0009e2d10) (0xc0009d45a0) Stream added, broadcasting: 1\nI0412 00:14:39.530599 1766 log.go:172] (0xc0009e2d10) Reply frame received for 1\nI0412 00:14:39.530646 1766 log.go:172] (0xc0009e2d10) (0xc0006335e0) Create stream\nI0412 00:14:39.530660 1766 log.go:172] (0xc0009e2d10) (0xc0006335e0) Stream added, broadcasting: 3\nI0412 00:14:39.531538 1766 log.go:172] (0xc0009e2d10) Reply frame received for 3\nI0412 00:14:39.531571 1766 log.go:172] (0xc0009e2d10) (0xc000522a00) Create stream\nI0412 00:14:39.531582 1766 log.go:172] (0xc0009e2d10) (0xc000522a00) Stream added, broadcasting: 5\nI0412 00:14:39.532385 1766 log.go:172] (0xc0009e2d10) Reply frame received for 5\nI0412 00:14:39.599417 1766 log.go:172] (0xc0009e2d10) Data frame received for 3\nI0412 00:14:39.599484 1766 log.go:172] (0xc0009e2d10) Data frame received for 5\nI0412 00:14:39.599519 1766 log.go:172] (0xc000522a00) (5) Data frame handling\nI0412 00:14:39.599553 1766 log.go:172] (0xc000522a00) (5) Data frame sent\nI0412 00:14:39.599568 1766 log.go:172] 
(0xc0009e2d10) Data frame received for 5\nI0412 00:14:39.599589 1766 log.go:172] (0xc000522a00) (5) Data frame handling\nI0412 00:14:39.599606 1766 log.go:172] (0xc0006335e0) (3) Data frame handling\nI0412 00:14:39.599630 1766 log.go:172] (0xc0006335e0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0412 00:14:39.599684 1766 log.go:172] (0xc0009e2d10) Data frame received for 3\nI0412 00:14:39.599718 1766 log.go:172] (0xc0006335e0) (3) Data frame handling\nI0412 00:14:39.601584 1766 log.go:172] (0xc0009e2d10) Data frame received for 1\nI0412 00:14:39.601618 1766 log.go:172] (0xc0009d45a0) (1) Data frame handling\nI0412 00:14:39.601639 1766 log.go:172] (0xc0009d45a0) (1) Data frame sent\nI0412 00:14:39.601659 1766 log.go:172] (0xc0009e2d10) (0xc0009d45a0) Stream removed, broadcasting: 1\nI0412 00:14:39.601685 1766 log.go:172] (0xc0009e2d10) Go away received\nI0412 00:14:39.602124 1766 log.go:172] (0xc0009e2d10) (0xc0009d45a0) Stream removed, broadcasting: 1\nI0412 00:14:39.602148 1766 log.go:172] (0xc0009e2d10) (0xc0006335e0) Stream removed, broadcasting: 3\nI0412 00:14:39.602169 1766 log.go:172] (0xc0009e2d10) (0xc000522a00) Stream removed, broadcasting: 5\n" Apr 12 00:14:39.606: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:14:39.606: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:14:49.625: INFO: Waiting for StatefulSet statefulset-252/ss2 to complete update Apr 12 00:14:49.626: INFO: Waiting for Pod statefulset-252/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 12 00:14:49.626: INFO: Waiting for Pod statefulset-252/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 12 00:14:49.626: INFO: Waiting for Pod statefulset-252/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 12 00:14:59.633: INFO: Waiting for StatefulSet 
statefulset-252/ss2 to complete update Apr 12 00:14:59.633: INFO: Waiting for Pod statefulset-252/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 12 00:14:59.633: INFO: Waiting for Pod statefulset-252/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 12 00:15:09.633: INFO: Waiting for StatefulSet statefulset-252/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 12 00:15:19.634: INFO: Deleting all statefulset in ns statefulset-252 Apr 12 00:15:19.637: INFO: Scaling statefulset ss2 to 0 Apr 12 00:15:39.651: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:15:39.654: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:15:39.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-252" for this suite. 
• [SLOW TEST:151.418 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":133,"skipped":2140,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:15:39.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-kjrc STEP: Creating a pod to test atomic-volume-subpath Apr 12 00:15:39.819: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kjrc" in namespace "subpath-664" to be "Succeeded or Failed" Apr 12 00:15:39.827: INFO: Pod "pod-subpath-test-configmap-kjrc": 
Phase="Pending", Reason="", readiness=false. Elapsed: 7.671836ms Apr 12 00:15:41.831: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011935019s Apr 12 00:15:43.835: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 4.016134846s Apr 12 00:15:45.839: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 6.020067916s Apr 12 00:15:47.851: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 8.031433844s Apr 12 00:15:49.855: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 10.035322244s Apr 12 00:15:51.859: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 12.039485687s Apr 12 00:15:53.862: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 14.04293297s Apr 12 00:15:55.866: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 16.046740452s Apr 12 00:15:57.870: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 18.050999044s Apr 12 00:15:59.874: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 20.054483237s Apr 12 00:16:01.878: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Running", Reason="", readiness=true. Elapsed: 22.058907395s Apr 12 00:16:03.882: INFO: Pod "pod-subpath-test-configmap-kjrc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.063187061s STEP: Saw pod success Apr 12 00:16:03.882: INFO: Pod "pod-subpath-test-configmap-kjrc" satisfied condition "Succeeded or Failed" Apr 12 00:16:03.886: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-kjrc container test-container-subpath-configmap-kjrc: STEP: delete the pod Apr 12 00:16:03.949: INFO: Waiting for pod pod-subpath-test-configmap-kjrc to disappear Apr 12 00:16:03.979: INFO: Pod pod-subpath-test-configmap-kjrc no longer exists STEP: Deleting pod pod-subpath-test-configmap-kjrc Apr 12 00:16:03.979: INFO: Deleting pod "pod-subpath-test-configmap-kjrc" in namespace "subpath-664" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:03.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-664" for this suite. • [SLOW TEST:24.297 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":134,"skipped":2142,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 
STEP: Creating a kubernetes client Apr 12 00:16:03.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 12 00:16:05.003: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:16:07.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247365, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247365, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247365, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247364, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:16:10.037: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:10.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3134" for this suite. STEP: Destroying namespace "webhook-3134-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.620 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":135,"skipped":2150,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:10.610: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 12 00:16:10.703: INFO: Waiting up to 5m0s for pod "downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15" in namespace "downward-api-950" to be "Succeeded or Failed" Apr 12 00:16:10.707: INFO: Pod "downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15": Phase="Pending", Reason="", readiness=false. Elapsed: 3.662354ms Apr 12 00:16:12.713: INFO: Pod "downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009864225s Apr 12 00:16:14.734: INFO: Pod "downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030887098s STEP: Saw pod success Apr 12 00:16:14.734: INFO: Pod "downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15" satisfied condition "Succeeded or Failed" Apr 12 00:16:14.750: INFO: Trying to get logs from node latest-worker pod downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15 container dapi-container: STEP: delete the pod Apr 12 00:16:14.804: INFO: Waiting for pod downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15 to disappear Apr 12 00:16:14.809: INFO: Pod downward-api-205595e2-92db-45f8-8c71-4a52bfde7f15 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:14.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-950" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:14.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 12 00:16:19.541: INFO: Successfully updated pod "annotationupdatedaf86bb4-0d42-474b-b361-bc9c522db073" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:21.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7107" for this suite. 
• [SLOW TEST:6.765 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2197,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:21.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 12 00:16:21.627: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:26.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9075" for this suite. 
• [SLOW TEST:5.421 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":138,"skipped":2205,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:27.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-b1ff587b-de96-4163-9a6f-8e412122d341 STEP: Creating a pod to test consume secrets Apr 12 00:16:27.340: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e" in namespace "projected-4675" to be "Succeeded or Failed" Apr 12 00:16:27.361: INFO: Pod "pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.37095ms Apr 12 00:16:29.364: INFO: Pod "pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023685741s Apr 12 00:16:31.368: INFO: Pod "pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027106421s STEP: Saw pod success Apr 12 00:16:31.368: INFO: Pod "pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e" satisfied condition "Succeeded or Failed" Apr 12 00:16:31.371: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e container projected-secret-volume-test: STEP: delete the pod Apr 12 00:16:31.392: INFO: Waiting for pod pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e to disappear Apr 12 00:16:31.396: INFO: Pod pod-projected-secrets-af85b477-a28e-4608-866b-f676fa4e9e9e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:31.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4675" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2212,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:31.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:16:31.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423" in namespace "downward-api-3709" to be "Succeeded or Failed" Apr 12 00:16:31.498: INFO: Pod "downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423": Phase="Pending", Reason="", readiness=false. Elapsed: 14.040724ms Apr 12 00:16:33.502: INFO: Pod "downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018140535s Apr 12 00:16:35.506: INFO: Pod "downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022536851s STEP: Saw pod success Apr 12 00:16:35.506: INFO: Pod "downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423" satisfied condition "Succeeded or Failed" Apr 12 00:16:35.510: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423 container client-container: STEP: delete the pod Apr 12 00:16:35.530: INFO: Waiting for pod downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423 to disappear Apr 12 00:16:35.534: INFO: Pod downwardapi-volume-b25f47fe-936b-453c-a154-7dda44f41423 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:35.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3709" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2222,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:35.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-cb6c3821-91dd-49d4-8b0e-8db5e14f04ec STEP: Creating a pod to test 
consume configMaps Apr 12 00:16:35.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b" in namespace "configmap-3502" to be "Succeeded or Failed" Apr 12 00:16:35.661: INFO: Pod "pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.42019ms Apr 12 00:16:37.665: INFO: Pod "pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020395525s Apr 12 00:16:39.669: INFO: Pod "pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02479123s STEP: Saw pod success Apr 12 00:16:39.669: INFO: Pod "pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b" satisfied condition "Succeeded or Failed" Apr 12 00:16:39.672: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b container configmap-volume-test: STEP: delete the pod Apr 12 00:16:39.699: INFO: Waiting for pod pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b to disappear Apr 12 00:16:39.702: INFO: Pod pod-configmaps-f4e13cc5-3194-48c1-81a9-c022cbe6bb6b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:39.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3502" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2241,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:39.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 12 00:16:40.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:16:42.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247400, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247400, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247400, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247400, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:16:45.171: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 12 00:16:49.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-9109 to-be-attached-pod -i -c=container1' Apr 12 00:16:49.337: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9109" for this suite. STEP: Destroying namespace "webhook-9109-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.767 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":142,"skipped":2251,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:49.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 12 00:16:49.570: INFO: namespace kubectl-7080 Apr 12 00:16:49.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7080' Apr 12 00:16:50.107: INFO: stderr: "" Apr 12 00:16:50.107: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 12 00:16:51.112: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:16:51.112: INFO: Found 0 / 1 Apr 12 00:16:52.111: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:16:52.111: INFO: Found 0 / 1 Apr 12 00:16:53.111: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:16:53.111: INFO: Found 0 / 1 Apr 12 00:16:54.111: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:16:54.111: INFO: Found 1 / 1 Apr 12 00:16:54.111: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 12 00:16:54.113: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:16:54.113: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 12 00:16:54.113: INFO: wait on agnhost-master startup in kubectl-7080 Apr 12 00:16:54.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-gv74x agnhost-master --namespace=kubectl-7080' Apr 12 00:16:54.221: INFO: stderr: "" Apr 12 00:16:54.221: INFO: stdout: "Paused\n" STEP: exposing RC Apr 12 00:16:54.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7080' Apr 12 00:16:54.380: INFO: stderr: "" Apr 12 00:16:54.380: INFO: stdout: "service/rm2 exposed\n" Apr 12 00:16:54.410: INFO: Service rm2 in namespace kubectl-7080 found. STEP: exposing service Apr 12 00:16:56.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7080' Apr 12 00:16:56.529: INFO: stderr: "" Apr 12 00:16:56.530: INFO: stdout: "service/rm3 exposed\n" Apr 12 00:16:56.536: INFO: Service rm3 in namespace kubectl-7080 found. 
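The expose sequence above can be sketched as the following commands (the `--server`/`--kubeconfig` flags from the log are omitted; the manifest filename is hypothetical, since the test pipes it via `-f -`):

```shell
# Sketch of the test's command sequence; names match the log above.
kubectl create -f agnhost-master-rc.yaml -n kubectl-7080    # replicationcontroller/agnhost-master
# Expose the RC as a service, then expose that service as a second one.
kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 -n kubectl-7080
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 -n kubectl-7080
kubectl get services rm2 rm3 -n kubectl-7080
```

Note that `kubectl expose service` copies the selector from the source service, so rm3 fronts the same agnhost pods as rm2 despite the different port.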
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:16:58.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7080" for this suite. • [SLOW TEST:9.076 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":143,"skipped":2252,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:16:58.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 12 00:16:58.609: INFO: Waiting up to 5m0s for pod "downward-api-c9a41eff-2f36-45e1-8186-76b407760c62" in namespace "downward-api-3039" to be "Succeeded or Failed" Apr 12 00:16:58.644: INFO: Pod "downward-api-c9a41eff-2f36-45e1-8186-76b407760c62": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.090376ms Apr 12 00:17:00.648: INFO: Pod "downward-api-c9a41eff-2f36-45e1-8186-76b407760c62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038593457s Apr 12 00:17:02.652: INFO: Pod "downward-api-c9a41eff-2f36-45e1-8186-76b407760c62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042845454s STEP: Saw pod success Apr 12 00:17:02.652: INFO: Pod "downward-api-c9a41eff-2f36-45e1-8186-76b407760c62" satisfied condition "Succeeded or Failed" Apr 12 00:17:02.656: INFO: Trying to get logs from node latest-worker pod downward-api-c9a41eff-2f36-45e1-8186-76b407760c62 container dapi-container: STEP: delete the pod Apr 12 00:17:02.675: INFO: Waiting for pod downward-api-c9a41eff-2f36-45e1-8186-76b407760c62 to disappear Apr 12 00:17:02.691: INFO: Pod downward-api-c9a41eff-2f36-45e1-8186-76b407760c62 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:02.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3039" for this suite. 
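The Downward API test above verifies that a pod's UID can be injected as an environment variable. A hedged sketch of the kind of pod it creates (pod, container, and env-var names here are illustrative; the log only shows the generated pod name and the container `dapi-container`):

```shell
# Sketch: inject the pod's own UID via the downward API fieldRef.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # illustrative image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # downward API field for the pod UID
EOF
```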
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:02.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:17:02.796: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:03.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5151" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":145,"skipped":2352,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:03.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-7498 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7498 to expose endpoints map[] Apr 12 00:17:04.088: INFO: successfully validated that service multi-endpoint-test in namespace services-7498 exposes endpoints map[] (52.069953ms elapsed) STEP: Creating pod pod1 in namespace services-7498 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7498 to expose endpoints map[pod1:[100]] Apr 12 00:17:08.190: INFO: successfully validated that service multi-endpoint-test in namespace services-7498 exposes endpoints map[pod1:[100]] (4.083021227s elapsed) STEP: Creating pod pod2 in namespace services-7498 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7498 to expose endpoints map[pod1:[100] pod2:[101]] Apr 12 
00:17:11.283: INFO: successfully validated that service multi-endpoint-test in namespace services-7498 exposes endpoints map[pod1:[100] pod2:[101]] (3.088508651s elapsed) STEP: Deleting pod pod1 in namespace services-7498 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7498 to expose endpoints map[pod2:[101]] Apr 12 00:17:12.327: INFO: successfully validated that service multi-endpoint-test in namespace services-7498 exposes endpoints map[pod2:[101]] (1.03933482s elapsed) STEP: Deleting pod pod2 in namespace services-7498 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7498 to expose endpoints map[] Apr 12 00:17:12.354: INFO: successfully validated that service multi-endpoint-test in namespace services-7498 exposes endpoints map[] (22.415786ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:12.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7498" for this suite. 
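A multi-port service like the `multi-endpoint-test` above can be sketched as follows; each named port maps to a distinct target port, which is why the endpoints map in the log lists `pod1:[100]` and `pod2:[101]` (the service ports and selector label here are illustrative):

```shell
# Sketch of a two-port service whose endpoints track pods on ports 100/101.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-7498
spec:
  selector:
    app: multi-endpoint-test     # illustrative selector
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
# Endpoints appear/disappear as matching pods become ready or are deleted.
kubectl get endpoints multi-endpoint-test -n services-7498
```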
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.587 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":146,"skipped":2364,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:12.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2510 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2510 STEP: creating replication controller externalsvc in namespace services-2510 I0412 00:17:12.657409 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2510, replica 
count: 2 I0412 00:17:15.707864 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0412 00:17:18.708112 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 12 00:17:18.765: INFO: Creating new exec pod Apr 12 00:17:22.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2510 execpodrfkj5 -- /bin/sh -x -c nslookup nodeport-service' Apr 12 00:17:25.490: INFO: stderr: "I0412 00:17:25.384603 1895 log.go:172] (0xc000bd6580) (0xc000bca280) Create stream\nI0412 00:17:25.384648 1895 log.go:172] (0xc000bd6580) (0xc000bca280) Stream added, broadcasting: 1\nI0412 00:17:25.387965 1895 log.go:172] (0xc000bd6580) Reply frame received for 1\nI0412 00:17:25.388007 1895 log.go:172] (0xc000bd6580) (0xc000502b40) Create stream\nI0412 00:17:25.388022 1895 log.go:172] (0xc000bd6580) (0xc000502b40) Stream added, broadcasting: 3\nI0412 00:17:25.389058 1895 log.go:172] (0xc000bd6580) Reply frame received for 3\nI0412 00:17:25.389102 1895 log.go:172] (0xc000bd6580) (0xc000bbc140) Create stream\nI0412 00:17:25.389247 1895 log.go:172] (0xc000bd6580) (0xc000bbc140) Stream added, broadcasting: 5\nI0412 00:17:25.390255 1895 log.go:172] (0xc000bd6580) Reply frame received for 5\nI0412 00:17:25.470888 1895 log.go:172] (0xc000bd6580) Data frame received for 5\nI0412 00:17:25.470918 1895 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0412 00:17:25.470938 1895 log.go:172] (0xc000bbc140) (5) Data frame sent\n+ nslookup nodeport-service\nI0412 00:17:25.477596 1895 log.go:172] (0xc000bd6580) Data frame received for 3\nI0412 00:17:25.477628 1895 log.go:172] (0xc000502b40) (3) Data frame handling\nI0412 00:17:25.477648 1895 log.go:172] (0xc000502b40) (3) Data frame 
sent\nI0412 00:17:25.478758 1895 log.go:172] (0xc000bd6580) Data frame received for 3\nI0412 00:17:25.478780 1895 log.go:172] (0xc000502b40) (3) Data frame handling\nI0412 00:17:25.478817 1895 log.go:172] (0xc000502b40) (3) Data frame sent\nI0412 00:17:25.479201 1895 log.go:172] (0xc000bd6580) Data frame received for 3\nI0412 00:17:25.479233 1895 log.go:172] (0xc000502b40) (3) Data frame handling\nI0412 00:17:25.479260 1895 log.go:172] (0xc000bd6580) Data frame received for 5\nI0412 00:17:25.479286 1895 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0412 00:17:25.483383 1895 log.go:172] (0xc000bd6580) Data frame received for 1\nI0412 00:17:25.483433 1895 log.go:172] (0xc000bca280) (1) Data frame handling\nI0412 00:17:25.483458 1895 log.go:172] (0xc000bca280) (1) Data frame sent\nI0412 00:17:25.483482 1895 log.go:172] (0xc000bd6580) (0xc000bca280) Stream removed, broadcasting: 1\nI0412 00:17:25.483639 1895 log.go:172] (0xc000bd6580) Go away received\nI0412 00:17:25.483884 1895 log.go:172] (0xc000bd6580) (0xc000bca280) Stream removed, broadcasting: 1\nI0412 00:17:25.483910 1895 log.go:172] (0xc000bd6580) (0xc000502b40) Stream removed, broadcasting: 3\nI0412 00:17:25.483930 1895 log.go:172] (0xc000bd6580) (0xc000bbc140) Stream removed, broadcasting: 5\n" Apr 12 00:17:25.490: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2510.svc.cluster.local\tcanonical name = externalsvc.services-2510.svc.cluster.local.\nName:\texternalsvc.services-2510.svc.cluster.local\nAddress: 10.96.143.177\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2510, will wait for the garbage collector to delete the pods Apr 12 00:17:25.551: INFO: Deleting ReplicationController externalsvc took: 7.107029ms Apr 12 00:17:25.651: INFO: Terminating ReplicationController externalsvc pods took: 100.206984ms Apr 12 00:17:33.075: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:33.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2510" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:20.661 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":147,"skipped":2372,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:33.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 12 00:17:33.178: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy 
--unix-socket=/tmp/kubectl-proxy-unix037787075/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:33.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9083" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":148,"skipped":2388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:33.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:39.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8604" for this suite. 
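The `--unix-socket` proxy check above can be reproduced by hand: serve the API over a local socket, then query `/api/` through it (socket path illustrative; requires a working kubeconfig):

```shell
# Sketch: kubectl proxy over a Unix socket, queried with curl.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
# curl speaks HTTP over the socket; the Host part of the URL is ignored.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1
```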
STEP: Destroying namespace "nsdeletetest-8751" for this suite. Apr 12 00:17:39.519: INFO: Namespace nsdeletetest-8751 was already deleted STEP: Destroying namespace "nsdeletetest-6285" for this suite. • [SLOW TEST:6.277 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":149,"skipped":2439,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:39.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:51.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1652" for this suite. 
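The Job test above relies on `restartPolicy: OnFailure`, which lets the kubelet restart a failed container in place (on the same node, keeping the pod's volumes) rather than creating a new pod. A hedged sketch of such a Job; the failure logic and names are illustrative, not taken from this run:

```shell
# Sketch: a Job whose containers fail once, then succeed after the kubelet
# restarts them locally. emptyDir survives container restarts within a pod,
# so the marker file persists across the restart.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local          # illustrative name
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: busybox           # illustrative image
        command: ["sh", "-c",
          "if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
kubectl wait --for=condition=complete job/fail-once-local --timeout=2m
```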
• [SLOW TEST:12.093 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":150,"skipped":2452,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:51.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 12 00:17:51.723: INFO: Waiting up to 5m0s for pod "pod-023dff38-c044-4a17-b1a9-9787bab26c3d" in namespace "emptydir-8030" to be "Succeeded or Failed" Apr 12 00:17:51.746: INFO: Pod "pod-023dff38-c044-4a17-b1a9-9787bab26c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.335025ms Apr 12 00:17:53.830: INFO: Pod "pod-023dff38-c044-4a17-b1a9-9787bab26c3d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.107734384s Apr 12 00:17:55.834: INFO: Pod "pod-023dff38-c044-4a17-b1a9-9787bab26c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111141289s STEP: Saw pod success Apr 12 00:17:55.834: INFO: Pod "pod-023dff38-c044-4a17-b1a9-9787bab26c3d" satisfied condition "Succeeded or Failed" Apr 12 00:17:55.836: INFO: Trying to get logs from node latest-worker2 pod pod-023dff38-c044-4a17-b1a9-9787bab26c3d container test-container: STEP: delete the pod Apr 12 00:17:55.873: INFO: Waiting for pod pod-023dff38-c044-4a17-b1a9-9787bab26c3d to disappear Apr 12 00:17:55.878: INFO: Pod pod-023dff38-c044-4a17-b1a9-9787bab26c3d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:17:55.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8030" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2453,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:17:55.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing 
setup for networking test in namespace pod-network-test-3451 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 12 00:17:55.955: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 12 00:17:56.012: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 12 00:17:58.017: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 12 00:18:00.016: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:18:02.015: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:18:04.016: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:18:06.017: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:18:08.016: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:18:10.029: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:18:12.015: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 12 00:18:12.022: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 12 00:18:14.026: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 12 00:18:16.029: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 12 00:18:22.074: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.82 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3451 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:18:22.075: INFO: >>> kubeConfig: /root/.kube/config I0412 00:18:22.108806 7 log.go:172] (0xc00274e370) (0xc00223b360) Create stream I0412 00:18:22.108840 7 log.go:172] (0xc00274e370) (0xc00223b360) Stream added, broadcasting: 1 I0412 00:18:22.111042 7 log.go:172] (0xc00274e370) Reply frame received for 1 I0412 00:18:22.111067 7 log.go:172] (0xc00274e370) (0xc00223b4a0) Create stream 
I0412 00:18:22.111078 7 log.go:172] (0xc00274e370) (0xc00223b4a0) Stream added, broadcasting: 3 I0412 00:18:22.111848 7 log.go:172] (0xc00274e370) Reply frame received for 3 I0412 00:18:22.111893 7 log.go:172] (0xc00274e370) (0xc00290a000) Create stream I0412 00:18:22.111912 7 log.go:172] (0xc00274e370) (0xc00290a000) Stream added, broadcasting: 5 I0412 00:18:22.112757 7 log.go:172] (0xc00274e370) Reply frame received for 5 I0412 00:18:23.206399 7 log.go:172] (0xc00274e370) Data frame received for 3 I0412 00:18:23.206443 7 log.go:172] (0xc00223b4a0) (3) Data frame handling I0412 00:18:23.206479 7 log.go:172] (0xc00223b4a0) (3) Data frame sent I0412 00:18:23.206503 7 log.go:172] (0xc00274e370) Data frame received for 3 I0412 00:18:23.206525 7 log.go:172] (0xc00223b4a0) (3) Data frame handling I0412 00:18:23.206651 7 log.go:172] (0xc00274e370) Data frame received for 5 I0412 00:18:23.206695 7 log.go:172] (0xc00290a000) (5) Data frame handling I0412 00:18:23.208674 7 log.go:172] (0xc00274e370) Data frame received for 1 I0412 00:18:23.208706 7 log.go:172] (0xc00223b360) (1) Data frame handling I0412 00:18:23.208738 7 log.go:172] (0xc00223b360) (1) Data frame sent I0412 00:18:23.208891 7 log.go:172] (0xc00274e370) (0xc00223b360) Stream removed, broadcasting: 1 I0412 00:18:23.208937 7 log.go:172] (0xc00274e370) Go away received I0412 00:18:23.209015 7 log.go:172] (0xc00274e370) (0xc00223b360) Stream removed, broadcasting: 1 I0412 00:18:23.209043 7 log.go:172] (0xc00274e370) (0xc00223b4a0) Stream removed, broadcasting: 3 I0412 00:18:23.209106 7 log.go:172] (0xc00274e370) (0xc00290a000) Stream removed, broadcasting: 5 Apr 12 00:18:23.209: INFO: Found all expected endpoints: [netserver-0] Apr 12 00:18:23.212: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.106 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3451 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 
12 00:18:23.213: INFO: >>> kubeConfig: /root/.kube/config I0412 00:18:23.244698 7 log.go:172] (0xc00274e9a0) (0xc001326140) Create stream I0412 00:18:23.244733 7 log.go:172] (0xc00274e9a0) (0xc001326140) Stream added, broadcasting: 1 I0412 00:18:23.247112 7 log.go:172] (0xc00274e9a0) Reply frame received for 1 I0412 00:18:23.247156 7 log.go:172] (0xc00274e9a0) (0xc0013263c0) Create stream I0412 00:18:23.247171 7 log.go:172] (0xc00274e9a0) (0xc0013263c0) Stream added, broadcasting: 3 I0412 00:18:23.248131 7 log.go:172] (0xc00274e9a0) Reply frame received for 3 I0412 00:18:23.248175 7 log.go:172] (0xc00274e9a0) (0xc001ae20a0) Create stream I0412 00:18:23.248190 7 log.go:172] (0xc00274e9a0) (0xc001ae20a0) Stream added, broadcasting: 5 I0412 00:18:23.249232 7 log.go:172] (0xc00274e9a0) Reply frame received for 5 I0412 00:18:24.342278 7 log.go:172] (0xc00274e9a0) Data frame received for 5 I0412 00:18:24.342320 7 log.go:172] (0xc001ae20a0) (5) Data frame handling I0412 00:18:24.342351 7 log.go:172] (0xc00274e9a0) Data frame received for 3 I0412 00:18:24.342387 7 log.go:172] (0xc0013263c0) (3) Data frame handling I0412 00:18:24.342407 7 log.go:172] (0xc0013263c0) (3) Data frame sent I0412 00:18:24.342415 7 log.go:172] (0xc00274e9a0) Data frame received for 3 I0412 00:18:24.342423 7 log.go:172] (0xc0013263c0) (3) Data frame handling I0412 00:18:24.344238 7 log.go:172] (0xc00274e9a0) Data frame received for 1 I0412 00:18:24.344249 7 log.go:172] (0xc001326140) (1) Data frame handling I0412 00:18:24.344260 7 log.go:172] (0xc001326140) (1) Data frame sent I0412 00:18:24.344272 7 log.go:172] (0xc00274e9a0) (0xc001326140) Stream removed, broadcasting: 1 I0412 00:18:24.344292 7 log.go:172] (0xc00274e9a0) Go away received I0412 00:18:24.344523 7 log.go:172] (0xc00274e9a0) (0xc001326140) Stream removed, broadcasting: 1 I0412 00:18:24.344552 7 log.go:172] (0xc00274e9a0) (0xc0013263c0) Stream removed, broadcasting: 3 I0412 00:18:24.344569 7 log.go:172] (0xc00274e9a0) (0xc001ae20a0) 
Stream removed, broadcasting: 5
Apr 12 00:18:24.344: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:18:24.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3451" for this suite.
• [SLOW TEST:28.466 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2463,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:18:24.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 12 00:18:24.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342356 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 12 00:18:24.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342356 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 12 00:18:34.405: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342414 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 12 00:18:34.405: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342414 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 12 00:18:44.414: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342440 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 12 00:18:44.414: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342440 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 12 00:18:54.421: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342470 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 12 00:18:54.421: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a e66763a1-5c25-4378-aef0-26972e30111e 7342470 0 2020-04-12 00:18:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 12 00:19:04.429: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b 2e7f19d7-d0ed-454d-a3f6-61203fe89bd4 7342500 0 2020-04-12 00:19:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 12 00:19:04.429: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b 2e7f19d7-d0ed-454d-a3f6-61203fe89bd4 7342500 0 2020-04-12 00:19:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 12 00:19:14.436: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b 2e7f19d7-d0ed-454d-a3f6-61203fe89bd4 7342530 0 2020-04-12 00:19:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 12 00:19:14.437: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b 2e7f19d7-d0ed-454d-a3f6-61203fe89bd4 7342530 0 2020-04-12 00:19:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:24.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7150" for this suite.
• [SLOW TEST:60.094 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":153,"skipped":2473,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:24.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 12 00:19:24.522: INFO: Waiting up to 5m0s for pod "pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8" in namespace "emptydir-4283" to be "Succeeded or Failed"
Apr 12 00:19:24.538: INFO: Pod "pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.837395ms
Apr 12 00:19:26.542: INFO: Pod "pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020538895s
Apr 12 00:19:28.546: INFO: Pod "pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024781539s
STEP: Saw pod success
Apr 12 00:19:28.546: INFO: Pod "pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8" satisfied condition "Succeeded or Failed"
Apr 12 00:19:28.550: INFO: Trying to get logs from node latest-worker2 pod pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8 container test-container:
STEP: delete the pod
Apr 12 00:19:28.678: INFO: Waiting for pod pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8 to disappear
Apr 12 00:19:28.682: INFO: Pod pod-fd88ec04-004a-4447-86e6-d2fe9b9bafe8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:28.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4283" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2480,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:28.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 12 00:19:29.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 12 00:19:31.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247569, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247569, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247569, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247569, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 12 00:19:34.318: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:19:34.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:35.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4586" for this suite.
STEP: Destroying namespace "webhook-4586-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.869 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":155,"skipped":2494,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:35.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:19:35.669: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-85181779-2c98-4674-842e-faea4dd3e30e" in namespace "security-context-test-8361" to be "Succeeded or Failed"
Apr 12 00:19:35.685: INFO: Pod "busybox-readonly-false-85181779-2c98-4674-842e-faea4dd3e30e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.699651ms
Apr 12 00:19:37.725: INFO: Pod "busybox-readonly-false-85181779-2c98-4674-842e-faea4dd3e30e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055508502s
Apr 12 00:19:39.730: INFO: Pod "busybox-readonly-false-85181779-2c98-4674-842e-faea4dd3e30e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060072207s
Apr 12 00:19:39.730: INFO: Pod "busybox-readonly-false-85181779-2c98-4674-842e-faea4dd3e30e" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:39.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8361" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2497,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:39.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 12 00:19:43.836: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:43.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8778" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2514,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:43.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-88f06804-b3c7-4872-8115-4641d3ec2e1a
STEP: Creating a pod to test consume secrets
Apr 12 00:19:44.038: INFO: Waiting up to 5m0s for pod "pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf" in namespace "secrets-5301" to be "Succeeded or Failed"
Apr 12 00:19:44.048: INFO: Pod "pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86375ms
Apr 12 00:19:46.053: INFO: Pod "pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015280129s
Apr 12 00:19:48.057: INFO: Pod "pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019651284s
STEP: Saw pod success
Apr 12 00:19:48.057: INFO: Pod "pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf" satisfied condition "Succeeded or Failed"
Apr 12 00:19:48.060: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf container secret-volume-test:
STEP: delete the pod
Apr 12 00:19:48.080: INFO: Waiting for pod pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf to disappear
Apr 12 00:19:48.084: INFO: Pod pod-secrets-1a4ad9a6-b936-406a-9091-4f3e2b743abf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:48.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5301" for this suite.
STEP: Destroying namespace "secret-namespace-5037" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:48.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-823e2ac1-22bd-43bd-ac47-3f944fc87ec8
STEP: Creating a pod to test consume configMaps
Apr 12 00:19:48.198: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e" in namespace "projected-7646" to be "Succeeded or Failed"
Apr 12 00:19:48.201: INFO: Pod "pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244901ms
Apr 12 00:19:50.205: INFO: Pod "pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006874748s
Apr 12 00:19:52.208: INFO: Pod "pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010433453s
STEP: Saw pod success
Apr 12 00:19:52.208: INFO: Pod "pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e" satisfied condition "Succeeded or Failed"
Apr 12 00:19:52.211: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e container projected-configmap-volume-test:
STEP: delete the pod
Apr 12 00:19:52.242: INFO: Waiting for pod pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e to disappear
Apr 12 00:19:52.299: INFO: Pod pod-projected-configmaps-6f4b04a3-430e-443c-b358-d4a5c42e2f2e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:52.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7646" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2563,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:52.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Apr 12 00:19:52.382: INFO: Waiting up to 5m0s for pod "client-containers-b830e340-a733-41fe-8b90-d49527d8c943" in namespace "containers-2557" to be "Succeeded or Failed"
Apr 12 00:19:52.384: INFO: Pod "client-containers-b830e340-a733-41fe-8b90-d49527d8c943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391768ms
Apr 12 00:19:54.401: INFO: Pod "client-containers-b830e340-a733-41fe-8b90-d49527d8c943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01881637s
Apr 12 00:19:56.405: INFO: Pod "client-containers-b830e340-a733-41fe-8b90-d49527d8c943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02320673s
STEP: Saw pod success
Apr 12 00:19:56.405: INFO: Pod "client-containers-b830e340-a733-41fe-8b90-d49527d8c943" satisfied condition "Succeeded or Failed"
Apr 12 00:19:56.408: INFO: Trying to get logs from node latest-worker2 pod client-containers-b830e340-a733-41fe-8b90-d49527d8c943 container test-container:
STEP: delete the pod
Apr 12 00:19:56.434: INFO: Waiting for pod client-containers-b830e340-a733-41fe-8b90-d49527d8c943 to disappear
Apr 12 00:19:56.468: INFO: Pod client-containers-b830e340-a733-41fe-8b90-d49527d8c943 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:19:56.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2557" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2568,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:19:56.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0412 00:20:07.605894 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 12 00:20:07.605: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:20:07.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-806" for this suite.
• [SLOW TEST:11.136 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":161,"skipped":2569,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:20:07.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:20:07.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6084" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2606,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:20:07.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6114
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 12 00:20:07.865: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 12 00:20:07.948: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 12 00:20:09.952: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 12 00:20:11.952: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 12 00:20:14.002: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 12 00:20:15.952: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 12 00:20:17.951: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 12 00:20:19.955: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 12 00:20:21.952: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 12 00:20:23.952: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 12 00:20:25.952: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 12 00:20:25.957: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 12 00:20:27.961: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 12 00:20:31.995: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.91:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6114 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 12 00:20:31.995: INFO: >>> kubeConfig: /root/.kube/config
I0412 00:20:32.035626 7 log.go:172] (0xc002db2630) (0xc00223b900) Create stream
I0412 00:20:32.035663 7 log.go:172] (0xc002db2630) (0xc00223b900) Stream added, broadcasting: 1
I0412 00:20:32.037854 7 log.go:172] (0xc002db2630) Reply frame received for 1
I0412 00:20:32.037889 7 log.go:172] (0xc002db2630) (0xc001146b40) Create stream
I0412 00:20:32.037898 7 log.go:172] (0xc002db2630) (0xc001146b40) Stream added, broadcasting: 3
I0412 00:20:32.038952 7 log.go:172] (0xc002db2630) Reply frame received for 3
I0412 00:20:32.038997 7 log.go:172] (0xc002db2630) (0xc00223bb80) Create stream
I0412 00:20:32.039011 7 log.go:172] (0xc002db2630) (0xc00223bb80) Stream added, broadcasting: 5
I0412 00:20:32.040156 7 log.go:172] (0xc002db2630) Reply frame received for 5
I0412 00:20:32.135067 7 log.go:172] (0xc002db2630) Data frame received for 3
I0412 00:20:32.135107 7 log.go:172] (0xc001146b40) (3) Data frame handling
I0412 00:20:32.135120 7 log.go:172] (0xc001146b40) (3) Data frame sent
I0412 00:20:32.135125 7 log.go:172] (0xc002db2630) Data frame received for 3
I0412 00:20:32.135129 7 log.go:172] (0xc001146b40) (3) Data frame handling
I0412 00:20:32.135242 7 log.go:172] (0xc002db2630) Data frame received for 5
I0412 00:20:32.135256 7 log.go:172] (0xc00223bb80) (5) Data frame handling
I0412 00:20:32.137962 7 log.go:172] (0xc002db2630) Data frame received for 1
I0412 00:20:32.137995 7 log.go:172] (0xc00223b900) (1) Data frame handling
I0412 00:20:32.138022 7 log.go:172] (0xc00223b900) (1) Data frame sent
I0412 00:20:32.138052 7 log.go:172] (0xc002db2630) (0xc00223b900) Stream removed, broadcasting: 1
I0412 00:20:32.138179 7 log.go:172] (0xc002db2630) (0xc00223b900) Stream removed, broadcasting: 1
I0412 00:20:32.138213 7 log.go:172] (0xc002db2630) (0xc001146b40) Stream removed, broadcasting: 3
I0412 00:20:32.138230 7 log.go:172] (0xc002db2630) (0xc00223bb80) Stream removed, broadcasting: 5
Apr 12 00:20:32.138: INFO: Found all expected endpoints: [netserver-0]
I0412 00:20:32.138544 7 log.go:172] (0xc002db2630) Go away received
Apr 12 00:20:32.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.117:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6114 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 12 00:20:32.142: INFO: >>> kubeConfig: /root/.kube/config
I0412 00:20:32.176067 7 log.go:172] (0xc002db2d10) (0xc00177e0a0) Create stream
I0412 00:20:32.176094 7 log.go:172] (0xc002db2d10) (0xc00177e0a0) Stream added, broadcasting: 1
I0412 00:20:32.178731 7 log.go:172] (0xc002db2d10) Reply frame received for 1
I0412 00:20:32.178794 7 log.go:172] (0xc002db2d10) (0xc001147040) Create stream
I0412 00:20:32.178825 7 log.go:172] (0xc002db2d10) (0xc001147040) Stream added, broadcasting: 3
I0412 00:20:32.179837 7 log.go:172] (0xc002db2d10) Reply frame received for 3
I0412 00:20:32.179866 7 log.go:172] (0xc002db2d10) (0xc001b99c20) Create stream
I0412 00:20:32.179877 7 log.go:172] (0xc002db2d10) (0xc001b99c20) Stream added, broadcasting: 5
I0412 00:20:32.180729 7 log.go:172] (0xc002db2d10) Reply frame received for 5
I0412 00:20:32.247625 7 log.go:172] (0xc002db2d10) Data frame received for 5
I0412 00:20:32.247661 7 log.go:172] (0xc001b99c20) (5) Data frame handling
I0412 00:20:32.247709 7 log.go:172] (0xc002db2d10) Data frame received for 3
I0412 00:20:32.247753 7 log.go:172] (0xc001147040) (3) Data frame handling
I0412 00:20:32.247780 7 log.go:172] (0xc001147040) (3) Data frame sent
I0412 00:20:32.247795 7 log.go:172] (0xc002db2d10) Data frame received for 3
I0412 00:20:32.247804 7 log.go:172] (0xc001147040) (3) Data frame handling
I0412 00:20:32.249565 7 log.go:172] (0xc002db2d10) Data frame received for 1
I0412 00:20:32.249602 7 log.go:172] (0xc00177e0a0) (1) Data frame handling
I0412 00:20:32.249626 7 log.go:172] (0xc00177e0a0) (1) Data frame sent
I0412 00:20:32.249657 7 log.go:172] (0xc002db2d10) (0xc00177e0a0) Stream removed, broadcasting: 1
I0412 00:20:32.249693 7 log.go:172] (0xc002db2d10) Go away received
I0412 00:20:32.249794 7 log.go:172] (0xc002db2d10) (0xc00177e0a0) Stream removed, broadcasting: 1
I0412 00:20:32.249822 7 log.go:172] (0xc002db2d10) (0xc001147040) Stream removed, broadcasting: 3
I0412 00:20:32.249836 7 log.go:172] (0xc002db2d10) (0xc001b99c20) Stream removed, broadcasting: 5
Apr 12 00:20:32.249: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:20:32.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6114" for this suite.
• [SLOW TEST:24.485 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2608,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:20:32.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 12 00:20:32.301: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 12 00:20:32.321: INFO: Waiting for terminating namespaces to be deleted... 
Apr 12 00:20:32.324: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 12 00:20:32.331: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:20:32.331: INFO: Container kube-proxy ready: true, restart count 0 Apr 12 00:20:32.331: INFO: host-test-container-pod from pod-network-test-6114 started at 2020-04-12 00:20:28 +0000 UTC (1 container status recorded) Apr 12 00:20:32.331: INFO: Container agnhost ready: true, restart count 0 Apr 12 00:20:32.331: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:20:32.331: INFO: Container kindnet-cni ready: true, restart count 0 Apr 12 00:20:32.331: INFO: netserver-0 from pod-network-test-6114 started at 2020-04-12 00:20:07 +0000 UTC (1 container status recorded) Apr 12 00:20:32.331: INFO: Container webserver ready: true, restart count 0 Apr 12 00:20:32.331: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 12 00:20:32.336: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:20:32.336: INFO: Container kube-proxy ready: true, restart count 0 Apr 12 00:20:32.336: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:20:32.336: INFO: Container kindnet-cni ready: true, restart count 0 Apr 12 00:20:32.336: INFO: netserver-1 from pod-network-test-6114 started at 2020-04-12 00:20:07 +0000 UTC (1 container status recorded) Apr 12 00:20:32.336: INFO: Container webserver ready: true, restart count 0 Apr 12 00:20:32.336: INFO: test-container-pod from pod-network-test-6114 started at 2020-04-12 00:20:28 +0000 UTC (1 container status recorded) Apr 12 00:20:32.336: INFO: Container webserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1604ea868e795982], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:20:33.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-936" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":164,"skipped":2624,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:20:33.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-778/configmap-test-040f0dc8-1e73-4856-acbf-39d750a9bc82 STEP: Creating a pod to test consume configMaps Apr 12 00:20:33.459: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea" in namespace "configmap-778" to be "Succeeded or Failed" Apr 12 00:20:33.463: INFO: Pod "pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.167481ms Apr 12 00:20:35.466: INFO: Pod "pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0067178s Apr 12 00:20:37.499: INFO: Pod "pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040043639s STEP: Saw pod success Apr 12 00:20:37.500: INFO: Pod "pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea" satisfied condition "Succeeded or Failed" Apr 12 00:20:37.671: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea container env-test: STEP: delete the pod Apr 12 00:20:37.948: INFO: Waiting for pod pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea to disappear Apr 12 00:20:37.978: INFO: Pod pod-configmaps-65c74061-aa2f-4abf-87bc-fe0f6fb1a7ea no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:20:37.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-778" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2642,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:20:38.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342 Apr 12 00:20:38.528: INFO: Pod name my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342: Found 0 pods out of 1 Apr 12 00:20:43.532: INFO: Pod name my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342: Found 1 pods out of 1 Apr 12 00:20:43.532: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342" are running Apr 12 00:20:43.536: INFO: Pod "my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342-fswcz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:20:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:20:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:20:41 +0000 UTC Reason: Message:} {Type:PodScheduled 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-12 00:20:38 +0000 UTC Reason: Message:}]) Apr 12 00:20:43.536: INFO: Trying to dial the pod Apr 12 00:20:48.546: INFO: Controller my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342: Got expected result from replica 1 [my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342-fswcz]: "my-hostname-basic-4d7c1c61-7506-4b32-bb5a-113587195342-fswcz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:20:48.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9936" for this suite. • [SLOW TEST:10.519 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":166,"skipped":2648,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:20:48.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6903 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 12 00:20:48.623: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 12 00:20:48.699: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 12 00:20:50.864: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 12 00:20:52.703: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:20:54.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:20:56.703: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:20:58.703: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:21:00.703: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:21:02.702: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:21:04.703: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 12 00:21:06.703: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 12 00:21:06.708: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 12 00:21:10.731: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.93:8080/dial?request=hostname&protocol=udp&host=10.244.2.92&port=8081&tries=1'] Namespace:pod-network-test-6903 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:21:10.731: INFO: >>> kubeConfig: /root/.kube/config I0412 00:21:10.765763 7 log.go:172] (0xc0030a3080) (0xc0014246e0) Create stream I0412 00:21:10.765793 7 log.go:172] (0xc0030a3080) (0xc0014246e0) Stream added, broadcasting: 1 I0412 00:21:10.767802 7 log.go:172] (0xc0030a3080) Reply frame received for 1 
I0412 00:21:10.767844 7 log.go:172] (0xc0030a3080) (0xc000f8c280) Create stream I0412 00:21:10.767858 7 log.go:172] (0xc0030a3080) (0xc000f8c280) Stream added, broadcasting: 3 I0412 00:21:10.768759 7 log.go:172] (0xc0030a3080) Reply frame received for 3 I0412 00:21:10.768814 7 log.go:172] (0xc0030a3080) (0xc001424820) Create stream I0412 00:21:10.768829 7 log.go:172] (0xc0030a3080) (0xc001424820) Stream added, broadcasting: 5 I0412 00:21:10.770107 7 log.go:172] (0xc0030a3080) Reply frame received for 5 I0412 00:21:10.861808 7 log.go:172] (0xc0030a3080) Data frame received for 3 I0412 00:21:10.861857 7 log.go:172] (0xc000f8c280) (3) Data frame handling I0412 00:21:10.861897 7 log.go:172] (0xc000f8c280) (3) Data frame sent I0412 00:21:10.862170 7 log.go:172] (0xc0030a3080) Data frame received for 5 I0412 00:21:10.862228 7 log.go:172] (0xc001424820) (5) Data frame handling I0412 00:21:10.862282 7 log.go:172] (0xc0030a3080) Data frame received for 3 I0412 00:21:10.862329 7 log.go:172] (0xc000f8c280) (3) Data frame handling I0412 00:21:10.864064 7 log.go:172] (0xc0030a3080) Data frame received for 1 I0412 00:21:10.864085 7 log.go:172] (0xc0014246e0) (1) Data frame handling I0412 00:21:10.864121 7 log.go:172] (0xc0014246e0) (1) Data frame sent I0412 00:21:10.864318 7 log.go:172] (0xc0030a3080) (0xc0014246e0) Stream removed, broadcasting: 1 I0412 00:21:10.864380 7 log.go:172] (0xc0030a3080) Go away received I0412 00:21:10.864447 7 log.go:172] (0xc0030a3080) (0xc0014246e0) Stream removed, broadcasting: 1 I0412 00:21:10.864468 7 log.go:172] (0xc0030a3080) (0xc000f8c280) Stream removed, broadcasting: 3 I0412 00:21:10.864480 7 log.go:172] (0xc0030a3080) (0xc001424820) Stream removed, broadcasting: 5 Apr 12 00:21:10.864: INFO: Waiting for responses: map[] Apr 12 00:21:10.868: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.93:8080/dial?request=hostname&protocol=udp&host=10.244.1.121&port=8081&tries=1'] Namespace:pod-network-test-6903 
PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:21:10.868: INFO: >>> kubeConfig: /root/.kube/config I0412 00:21:10.903716 7 log.go:172] (0xc002750d10) (0xc001f30780) Create stream I0412 00:21:10.903745 7 log.go:172] (0xc002750d10) (0xc001f30780) Stream added, broadcasting: 1 I0412 00:21:10.906223 7 log.go:172] (0xc002750d10) Reply frame received for 1 I0412 00:21:10.906262 7 log.go:172] (0xc002750d10) (0xc0011ed900) Create stream I0412 00:21:10.906271 7 log.go:172] (0xc002750d10) (0xc0011ed900) Stream added, broadcasting: 3 I0412 00:21:10.907051 7 log.go:172] (0xc002750d10) Reply frame received for 3 I0412 00:21:10.907082 7 log.go:172] (0xc002750d10) (0xc001f308c0) Create stream I0412 00:21:10.907094 7 log.go:172] (0xc002750d10) (0xc001f308c0) Stream added, broadcasting: 5 I0412 00:21:10.907814 7 log.go:172] (0xc002750d10) Reply frame received for 5 I0412 00:21:10.968249 7 log.go:172] (0xc002750d10) Data frame received for 3 I0412 00:21:10.968286 7 log.go:172] (0xc0011ed900) (3) Data frame handling I0412 00:21:10.968305 7 log.go:172] (0xc0011ed900) (3) Data frame sent I0412 00:21:10.968546 7 log.go:172] (0xc002750d10) Data frame received for 5 I0412 00:21:10.968564 7 log.go:172] (0xc001f308c0) (5) Data frame handling I0412 00:21:10.968612 7 log.go:172] (0xc002750d10) Data frame received for 3 I0412 00:21:10.968628 7 log.go:172] (0xc0011ed900) (3) Data frame handling I0412 00:21:10.970487 7 log.go:172] (0xc002750d10) Data frame received for 1 I0412 00:21:10.970522 7 log.go:172] (0xc001f30780) (1) Data frame handling I0412 00:21:10.970552 7 log.go:172] (0xc001f30780) (1) Data frame sent I0412 00:21:10.970576 7 log.go:172] (0xc002750d10) (0xc001f30780) Stream removed, broadcasting: 1 I0412 00:21:10.970605 7 log.go:172] (0xc002750d10) Go away received I0412 00:21:10.970757 7 log.go:172] (0xc002750d10) (0xc001f30780) Stream removed, broadcasting: 1 I0412 00:21:10.970800 7 log.go:172] 
(0xc002750d10) (0xc0011ed900) Stream removed, broadcasting: 3 I0412 00:21:10.970821 7 log.go:172] (0xc002750d10) (0xc001f308c0) Stream removed, broadcasting: 5 Apr 12 00:21:10.970: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:21:10.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6903" for this suite. • [SLOW TEST:22.427 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2648,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:21:10.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3340 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-3340 Apr 12 00:21:11.074: INFO: Found 0 stateful pods, waiting for 1 Apr 12 00:21:21.079: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 12 00:21:21.100: INFO: Deleting all statefulset in ns statefulset-3340 Apr 12 00:21:21.139: INFO: Scaling statefulset ss to 0 Apr 12 00:21:41.194: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:21:41.198: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:21:41.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3340" for this suite. 
• [SLOW TEST:30.238 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":168,"skipped":2652,"failed":0} SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:21:41.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:21:41.277: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/ pods/ (200; 4.551752ms)
Apr 12 00:21:41.280: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.747757ms)
Apr 12 00:21:41.283: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.035123ms)
Apr 12 00:21:41.286: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.707779ms)
Apr 12 00:21:41.289: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.057534ms)
Apr 12 00:21:41.313: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 23.818778ms)
Apr 12 00:21:41.316: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.883892ms)
Apr 12 00:21:41.319: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.716251ms)
Apr 12 00:21:41.322: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.209399ms)
Apr 12 00:21:41.325: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.075381ms)
Apr 12 00:21:41.328: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.629605ms)
Apr 12 00:21:41.331: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.0698ms)
Apr 12 00:21:41.334: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.387662ms)
Apr 12 00:21:41.338: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.638949ms)
Apr 12 00:21:41.341: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.321341ms)
Apr 12 00:21:41.345: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.547263ms)
Apr 12 00:21:41.348: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.558227ms)
Apr 12 00:21:41.352: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.927282ms)
Apr 12 00:21:41.356: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.659118ms)
Apr 12 00:21:41.360: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ 
(200; 3.765706ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:21:41.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7168" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":169,"skipped":2656,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:21:41.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0412 00:21:42.489452 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 12 00:21:42.489: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:21:42.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4876" for this suite. 
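[editor's note] The garbage-collector test above deletes a Deployment without orphaning and then waits for its ReplicaSet and Pods to disappear. The GC's core rule is that a dependent is deleted once none of its ownerReferences point at a live object; a simplified model of that decision (the real collector tracks UIDs in a dependency graph, and the names below are illustrative):

```go
package main

import "fmt"

// object is a minimal stand-in for an API object as seen by the
// garbage collector: its UID plus the UIDs of its owners.
type object struct {
	uid    string
	owners []string
}

// collectable reports whether obj should be deleted because none of
// its owners still exist. Objects with no ownerReferences are never
// garbage-collected.
func collectable(obj object, live map[string]bool) bool {
	if len(obj.owners) == 0 {
		return false
	}
	for _, owner := range obj.owners {
		if live[owner] {
			return false
		}
	}
	return true
}

func main() {
	// The Deployment has been deleted; its ReplicaSet is still live.
	live := map[string]bool{"rs-1": true}
	rs := object{uid: "rs-1", owners: []string{"deploy-1"}}
	pod := object{uid: "pod-1", owners: []string{"rs-1"}}

	fmt.Println("collect rs?", collectable(rs, live))   // owner deploy-1 is gone
	fmt.Println("collect pod?", collectable(pod, live)) // owner rs-1 still live
}
```

Once the ReplicaSet is actually removed, the Pods' owner also stops being live and they are collected in turn, which is the cascade the test waits for.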
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":170,"skipped":2663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:21:42.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:21:42.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3" in namespace "downward-api-5534" to be "Succeeded or Failed" Apr 12 00:21:42.573: INFO: Pod "downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078309ms Apr 12 00:21:44.578: INFO: Pod "downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00834991s Apr 12 00:21:46.594: INFO: Pod "downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024541065s STEP: Saw pod success Apr 12 00:21:46.594: INFO: Pod "downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3" satisfied condition "Succeeded or Failed" Apr 12 00:21:46.596: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3 container client-container: STEP: delete the pod Apr 12 00:21:46.661: INFO: Waiting for pod downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3 to disappear Apr 12 00:21:46.738: INFO: Pod downwardapi-volume-7ff9d59b-7aa8-4440-b1a5-db1ab69166e3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:21:46.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5534" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2733,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:21:46.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 12 00:21:46.887: INFO: Waiting up to 5m0s for pod 
"pod-6264312b-1ee7-4948-858d-132da7b73712" in namespace "emptydir-2044" to be "Succeeded or Failed" Apr 12 00:21:46.915: INFO: Pod "pod-6264312b-1ee7-4948-858d-132da7b73712": Phase="Pending", Reason="", readiness=false. Elapsed: 27.922324ms Apr 12 00:21:48.918: INFO: Pod "pod-6264312b-1ee7-4948-858d-132da7b73712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031502235s Apr 12 00:21:50.923: INFO: Pod "pod-6264312b-1ee7-4948-858d-132da7b73712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036183766s STEP: Saw pod success Apr 12 00:21:50.923: INFO: Pod "pod-6264312b-1ee7-4948-858d-132da7b73712" satisfied condition "Succeeded or Failed" Apr 12 00:21:50.927: INFO: Trying to get logs from node latest-worker2 pod pod-6264312b-1ee7-4948-858d-132da7b73712 container test-container: STEP: delete the pod Apr 12 00:21:50.950: INFO: Waiting for pod pod-6264312b-1ee7-4948-858d-132da7b73712 to disappear Apr 12 00:21:50.955: INFO: Pod pod-6264312b-1ee7-4948-858d-132da7b73712 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:21:50.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2044" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2754,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:21:50.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:21:51.093: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 12 00:21:56.133: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 12 00:21:56.133: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 12 00:21:58.137: INFO: Creating deployment "test-rollover-deployment" Apr 12 00:21:58.163: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 12 00:22:00.170: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 12 00:22:00.177: INFO: Ensure that both replica sets have 1 created replica Apr 12 00:22:00.181: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 12 00:22:00.186: INFO: Updating deployment test-rollover-deployment Apr 12 00:22:00.186: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 12 00:22:02.227: 
INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 12 00:22:02.234: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 12 00:22:02.239: INFO: all replica sets need to contain the pod-template-hash label Apr 12 00:22:02.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247720, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:22:04.247: INFO: all replica sets need to contain the pod-template-hash label Apr 12 00:22:04.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247723, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:22:06.247: INFO: all replica sets need to contain the pod-template-hash label Apr 12 00:22:06.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247723, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:22:08.248: INFO: all replica sets need to contain the pod-template-hash label Apr 12 00:22:08.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247723, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:22:10.247: INFO: all replica sets need to contain the pod-template-hash label Apr 12 00:22:10.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247723, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:22:12.246: INFO: all replica sets need to contain the pod-template-hash label Apr 12 00:22:12.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247723, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247718, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:22:14.248: INFO: Apr 12 00:22:14.248: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 12 00:22:14.255: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8612 /apis/apps/v1/namespaces/deployment-8612/deployments/test-rollover-deployment de1589a5-7a09-474a-849f-d1eae867b994 7343941 2 2020-04-12 00:21:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027d33a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-12 00:21:58 +0000 UTC,LastTransitionTime:2020-04-12 00:21:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-12 00:22:13 +0000 UTC,LastTransitionTime:2020-04-12 00:21:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 12 00:22:14.258: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-8612 /apis/apps/v1/namespaces/deployment-8612/replicasets/test-rollover-deployment-78df7bc796 decfde61-3348-4cc4-b1e2-92f211a50ca7 7343930 2 2020-04-12 00:22:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment de1589a5-7a09-474a-849f-d1eae867b994 0xc0029317e7 0xc0029317e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002931858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:22:14.258: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 12 00:22:14.258: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8612 /apis/apps/v1/namespaces/deployment-8612/replicasets/test-rollover-controller a8792e1f-3218-4b20-9daf-9c1f88527b0e 7343940 2 2020-04-12 00:21:51 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment de1589a5-7a09-474a-849f-d1eae867b994 0xc002931337 0xc002931338}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0029314b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:22:14.258: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-8612 /apis/apps/v1/namespaces/deployment-8612/replicasets/test-rollover-deployment-f6c94f66c 2ec22c9f-18fe-4d47-9d39-719124fccd9d 7343881 2 2020-04-12 00:21:58 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment de1589a5-7a09-474a-849f-d1eae867b994 0xc0029319a0 0xc0029319a1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002931a18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:22:14.260: INFO: Pod "test-rollover-deployment-78df7bc796-ccb56" 
is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-ccb56 test-rollover-deployment-78df7bc796- deployment-8612 /api/v1/namespaces/deployment-8612/pods/test-rollover-deployment-78df7bc796-ccb56 57950eaa-7b6f-4bf8-8228-b59984990014 7343898 0 2020-04-12 00:22:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 decfde61-3348-4cc4-b1e2-92f211a50ca7 0xc002f20f97 0xc002f20f98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-htvg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-htvg4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-htvg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Restar
tPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:22:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:22:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:22:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:22:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.97,StartTime:2020-04-12 00:22:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:22:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://924564e7e884f8724f065f32724bdf418732170f0b1b94db75136732dd2f1b4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:14.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8612" for this suite. 
• [SLOW TEST:23.306 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":173,"skipped":2756,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:14.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 12 00:22:14.706: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:22:16.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247734, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247734, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247734, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722247734, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:22:19.758: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:20.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6555" for this suite. STEP: Destroying namespace "webhook-6555-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.486 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":174,"skipped":2773,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:20.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 12 00:22:25.402: INFO: Successfully updated pod "labelsupdatea102bb21-0ede-4d66-828d-10455ab05042" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:27.438: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6472" for this suite. • [SLOW TEST:6.691 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2792,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:27.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 12 00:22:27.498: INFO: Waiting up to 5m0s for pod "pod-73576a46-9782-4263-987a-512d4a08c7ed" in namespace "emptydir-8714" to be "Succeeded or Failed" Apr 12 00:22:27.502: INFO: Pod "pod-73576a46-9782-4263-987a-512d4a08c7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.929764ms Apr 12 00:22:29.506: INFO: Pod "pod-73576a46-9782-4263-987a-512d4a08c7ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008355464s Apr 12 00:22:31.510: INFO: Pod "pod-73576a46-9782-4263-987a-512d4a08c7ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012655952s STEP: Saw pod success Apr 12 00:22:31.510: INFO: Pod "pod-73576a46-9782-4263-987a-512d4a08c7ed" satisfied condition "Succeeded or Failed" Apr 12 00:22:31.514: INFO: Trying to get logs from node latest-worker2 pod pod-73576a46-9782-4263-987a-512d4a08c7ed container test-container: STEP: delete the pod Apr 12 00:22:31.533: INFO: Waiting for pod pod-73576a46-9782-4263-987a-512d4a08c7ed to disappear Apr 12 00:22:31.537: INFO: Pod pod-73576a46-9782-4263-987a-512d4a08c7ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:31.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8714" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:31.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a 
pod to test emptydir 0777 on tmpfs Apr 12 00:22:31.593: INFO: Waiting up to 5m0s for pod "pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f" in namespace "emptydir-5353" to be "Succeeded or Failed" Apr 12 00:22:31.630: INFO: Pod "pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.562456ms Apr 12 00:22:33.635: INFO: Pod "pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041871813s Apr 12 00:22:35.639: INFO: Pod "pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046166112s STEP: Saw pod success Apr 12 00:22:35.639: INFO: Pod "pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f" satisfied condition "Succeeded or Failed" Apr 12 00:22:35.642: INFO: Trying to get logs from node latest-worker2 pod pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f container test-container: STEP: delete the pod Apr 12 00:22:35.660: INFO: Waiting for pod pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f to disappear Apr 12 00:22:35.680: INFO: Pod pod-3ac671a5-1d9f-46ef-977e-e0d3cc53735f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:35.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5353" for this suite. 
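The emptyDir tests above each create a short-lived pod, wait for it to reach "Succeeded", check the volume's observed permissions, and delete it. A minimal sketch of an equivalent manifest for the (non-root,0777,tmpfs) variant follows; the pod name, image, and command are illustrative, not the exact values generated by the framework:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative; the framework generates pod-<uuid>
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root, as in the (non-root,...) variants
  containers:
  - name: test-container
    image: busybox               # assumed image; the framework uses its own test image
    command: ["sh", "-c", "ls -ld /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed; omit "medium" for the default-medium variant
```

Note that `emptyDir` itself has no mode field: the 0777/0644 in the test names refers to the mount permissions the test container observes on the volume path.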
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":2866,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:35.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 12 00:22:35.824: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8914 /api/v1/namespaces/watch-8914/configmaps/e2e-watch-test-resource-version 3e490449-1a4f-41d8-90ff-0ef0d051d1bc 7344183 0 2020-04-12 00:22:35 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 12 00:22:35.824: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8914 /api/v1/namespaces/watch-8914/configmaps/e2e-watch-test-resource-version 3e490449-1a4f-41d8-90ff-0ef0d051d1bc 7344184 0 2020-04-12 00:22:35 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:35.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8914" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":178,"skipped":2881,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:35.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:22:35.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9" in namespace "downward-api-7697" to be "Succeeded or Failed" Apr 12 00:22:35.911: INFO: Pod "downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.856898ms Apr 12 00:22:37.915: INFO: Pod "downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008020838s Apr 12 00:22:39.920: INFO: Pod "downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012466121s STEP: Saw pod success Apr 12 00:22:39.920: INFO: Pod "downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9" satisfied condition "Succeeded or Failed" Apr 12 00:22:39.923: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9 container client-container: STEP: delete the pod Apr 12 00:22:39.984: INFO: Waiting for pod downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9 to disappear Apr 12 00:22:39.987: INFO: Pod downwardapi-volume-a098b688-e4e7-4472-b850-7ee529e639a9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:39.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7697" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":2888,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:39.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 12 00:22:44.566: INFO: Successfully updated pod "pod-update-3a8dfcce-e006-4b05-bd77-2ccfe07f9e25" STEP: verifying the updated pod is in kubernetes Apr 12 00:22:44.576: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:44.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2935" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":2940,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:44.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:22:55.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-730" for this suite. • [SLOW TEST:11.141 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":181,"skipped":2953,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:22:55.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 12 00:23:00.935: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:01.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1270" for this suite. 
• [SLOW TEST:6.246 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":182,"skipped":2959,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:01.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 12 00:23:02.086: INFO: Waiting up to 5m0s for pod "pod-e36452a4-a834-487a-a0cb-0c5b42228ee4" in namespace "emptydir-7245" to be "Succeeded or Failed" Apr 12 00:23:02.091: INFO: Pod "pod-e36452a4-a834-487a-a0cb-0c5b42228ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014831ms Apr 12 00:23:04.114: INFO: Pod "pod-e36452a4-a834-487a-a0cb-0c5b42228ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027353716s Apr 12 00:23:06.118: INFO: Pod "pod-e36452a4-a834-487a-a0cb-0c5b42228ee4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031546236s STEP: Saw pod success Apr 12 00:23:06.118: INFO: Pod "pod-e36452a4-a834-487a-a0cb-0c5b42228ee4" satisfied condition "Succeeded or Failed" Apr 12 00:23:06.121: INFO: Trying to get logs from node latest-worker2 pod pod-e36452a4-a834-487a-a0cb-0c5b42228ee4 container test-container: STEP: delete the pod Apr 12 00:23:06.140: INFO: Waiting for pod pod-e36452a4-a834-487a-a0cb-0c5b42228ee4 to disappear Apr 12 00:23:06.188: INFO: Pod pod-e36452a4-a834-487a-a0cb-0c5b42228ee4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:06.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7245" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":2978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:06.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 12 00:23:06.230: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:06.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-718" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":184,"skipped":3012,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:06.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 12 00:23:06.381: INFO: Waiting up to 5m0s for pod "client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4" in namespace "containers-5774" to be "Succeeded or Failed" Apr 12 00:23:06.389: INFO: Pod "client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26485ms Apr 12 00:23:08.393: INFO: Pod "client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012582077s Apr 12 00:23:10.397: INFO: Pod "client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016372958s STEP: Saw pod success Apr 12 00:23:10.397: INFO: Pod "client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4" satisfied condition "Succeeded or Failed" Apr 12 00:23:10.399: INFO: Trying to get logs from node latest-worker pod client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4 container test-container: STEP: delete the pod Apr 12 00:23:10.414: INFO: Waiting for pod client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4 to disappear Apr 12 00:23:10.431: INFO: Pod client-containers-d069c902-0a71-4dba-9c05-3a2b4d5e47e4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:10.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5774" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3022,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:10.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-904ba67d-6dbc-4e76-8ad9-0fd57c20dc4c STEP: Creating a pod to test consume configMaps Apr 12 00:23:10.516: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01" in namespace "projected-7943" to be "Succeeded or Failed" Apr 12 00:23:10.546: INFO: Pod "pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01": Phase="Pending", Reason="", readiness=false. Elapsed: 29.803937ms Apr 12 00:23:12.550: INFO: Pod "pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033916084s Apr 12 00:23:14.555: INFO: Pod "pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038450642s STEP: Saw pod success Apr 12 00:23:14.555: INFO: Pod "pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01" satisfied condition "Succeeded or Failed" Apr 12 00:23:14.558: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01 container projected-configmap-volume-test: STEP: delete the pod Apr 12 00:23:14.578: INFO: Waiting for pod pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01 to disappear Apr 12 00:23:14.583: INFO: Pod pod-projected-configmaps-bf4784be-8e4e-4cca-8a5b-780521b7ce01 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:14.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7943" for this suite. 
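The projected ConfigMap test above mounts the same ConfigMap into one pod through two separate `projected` volumes and checks both mounts are readable. A minimal sketch, with the ConfigMap name, key, and image as illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo     # illustrative; the framework generates pod-projected-configmaps-<uuid>
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/projected-1/data /etc/projected-2/data"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # same ConfigMap projected twice
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```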
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:14.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:23:14.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce" in namespace "downward-api-7409" to be "Succeeded or Failed" Apr 12 00:23:14.686: INFO: Pod "downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 17.657278ms Apr 12 00:23:16.693: INFO: Pod "downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025318997s Apr 12 00:23:18.698: INFO: Pod "downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029727641s STEP: Saw pod success Apr 12 00:23:18.698: INFO: Pod "downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce" satisfied condition "Succeeded or Failed" Apr 12 00:23:18.701: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce container client-container: STEP: delete the pod Apr 12 00:23:18.747: INFO: Waiting for pod downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce to disappear Apr 12 00:23:18.787: INFO: Pod downwardapi-volume-dc1e9a50-7c64-42f0-af28-8a6af548b9ce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:18.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7409" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:18.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:23:18.838: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation 
(kubectl create and apply) allows request with any unknown properties Apr 12 00:23:20.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8474 create -f -' Apr 12 00:23:23.946: INFO: stderr: "" Apr 12 00:23:23.946: INFO: stdout: "e2e-test-crd-publish-openapi-9158-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 12 00:23:23.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8474 delete e2e-test-crd-publish-openapi-9158-crds test-cr' Apr 12 00:23:24.048: INFO: stderr: "" Apr 12 00:23:24.048: INFO: stdout: "e2e-test-crd-publish-openapi-9158-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 12 00:23:24.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8474 apply -f -' Apr 12 00:23:24.291: INFO: stderr: "" Apr 12 00:23:24.291: INFO: stdout: "e2e-test-crd-publish-openapi-9158-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 12 00:23:24.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8474 delete e2e-test-crd-publish-openapi-9158-crds test-cr' Apr 12 00:23:24.411: INFO: stderr: "" Apr 12 00:23:24.411: INFO: stdout: "e2e-test-crd-publish-openapi-9158-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 12 00:23:24.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9158-crds' Apr 12 00:23:24.650: INFO: stderr: "" Apr 12 00:23:24.650: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9158-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n 
\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:27.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8474" for this suite. • [SLOW TEST:8.767 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":188,"skipped":3110,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:27.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine 
Apr 12 00:23:27.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9684' Apr 12 00:23:27.707: INFO: stderr: "" Apr 12 00:23:27.707: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 12 00:23:32.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9684 -o json' Apr 12 00:23:32.857: INFO: stderr: "" Apr 12 00:23:32.857: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-12T00:23:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9684\",\n \"resourceVersion\": \"7344622\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9684/pods/e2e-test-httpd-pod\",\n \"uid\": \"7890dcaa-6f9a-44ab-b44c-b32cb967e8dd\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-c6v5t\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": 
\"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-c6v5t\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-c6v5t\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-12T00:23:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-12T00:23:30Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-12T00:23:30Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-12T00:23:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://88d74add472c90a42a40fed88c72493ac4627491dcdc922e44c5cd9795ed425d\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-12T00:23:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.134\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.134\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-12T00:23:27Z\"\n }\n}\n" STEP: replace the image in the pod Apr 12 00:23:32.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9684' Apr 12 00:23:33.171: INFO: 
stderr: "" Apr 12 00:23:33.171: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 12 00:23:33.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9684' Apr 12 00:23:42.997: INFO: stderr: "" Apr 12 00:23:42.998: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:42.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9684" for this suite. • [SLOW TEST:15.444 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":189,"skipped":3111,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:43.007: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:23:43.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6" in namespace "downward-api-1431" to be "Succeeded or Failed" Apr 12 00:23:43.086: INFO: Pod "downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.714044ms Apr 12 00:23:45.090: INFO: Pod "downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007806237s Apr 12 00:23:47.094: INFO: Pod "downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011587406s STEP: Saw pod success Apr 12 00:23:47.094: INFO: Pod "downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6" satisfied condition "Succeeded or Failed" Apr 12 00:23:47.097: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6 container client-container: STEP: delete the pod Apr 12 00:23:47.118: INFO: Waiting for pod downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6 to disappear Apr 12 00:23:47.164: INFO: Pod downwardapi-volume-86e746d6-f230-4565-b23d-b8fdb1029de6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:47.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1431" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3128,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:47.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:23:51.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5232" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":191,"skipped":3172,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:23:51.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-ae913ded-3282-483b-bbcd-491366ab02ee in namespace container-probe-2907 Apr 12 00:23:55.482: INFO: Started pod test-webserver-ae913ded-3282-483b-bbcd-491366ab02ee in namespace container-probe-2907 STEP: checking the pod's current state and verifying that restartCount is present Apr 12 00:23:55.485: INFO: Initial restart count of pod test-webserver-ae913ded-3282-483b-bbcd-491366ab02ee is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:27:56.092: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2907" for this suite. • [SLOW TEST:244.737 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:27:56.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:27:56.158: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 12 00:27:56.201: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 12 00:28:01.206: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 12 00:28:01.206: INFO: Creating deployment "test-rolling-update-deployment" Apr 12 
00:28:01.232: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 12 00:28:01.271: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 12 00:28:03.278: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 12 00:28:03.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248081, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248081, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248081, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248081, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:28:05.285: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 12 00:28:05.295: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7284 /apis/apps/v1/namespaces/deployment-7284/deployments/test-rolling-update-deployment 7f1b28ae-ceea-4c4f-a5cf-1f2e7c3e0737 7345565 1 2020-04-12 00:28:01 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002505368 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-12 00:28:01 +0000 UTC,LastTransitionTime:2020-04-12 00:28:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-12 00:28:04 +0000 UTC,LastTransitionTime:2020-04-12 00:28:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 12 00:28:05.298: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-7284 /apis/apps/v1/namespaces/deployment-7284/replicasets/test-rolling-update-deployment-664dd8fc7f 5bdbb6d3-01bd-422c-9549-36533f299c0e 7345554 1 2020-04-12 00:28:01 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7f1b28ae-ceea-4c4f-a5cf-1f2e7c3e0737 0xc002505877 0xc002505878}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025058e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:28:05.298: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 12 00:28:05.298: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7284 /apis/apps/v1/namespaces/deployment-7284/replicasets/test-rolling-update-controller 
1e6693c3-d523-419c-9428-ba789a5c839f 7345563 2 2020-04-12 00:27:56 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7f1b28ae-ceea-4c4f-a5cf-1f2e7c3e0737 0xc00250578f 0xc0025057a0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002505808 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:28:05.302: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-q6x22" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-q6x22 test-rolling-update-deployment-664dd8fc7f- deployment-7284 /api/v1/namespaces/deployment-7284/pods/test-rolling-update-deployment-664dd8fc7f-q6x22 6477dc1e-cb45-4fb6-93b7-728fbbddc229 7345553 0 2020-04-12 00:28:01 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 5bdbb6d3-01bd-422c-9549-36533f299c0e 0xc003a889b7 0xc003a889b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d8wn5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d8wn5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d8wn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:28:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:28:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.106,StartTime:2020-04-12 00:28:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:28:03 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://57e6408a9bcaa250f84e07f4c44e5d7011f5da16b3efa1cb06207179f594397d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:28:05.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7284" for this suite. • [SLOW TEST:9.190 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":193,"skipped":3231,"failed":0} S ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:28:05.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 12 00:28:05.373: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 12 00:28:05.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8932' Apr 12 00:28:05.637: INFO: stderr: "" Apr 12 00:28:05.637: INFO: stdout: "service/agnhost-slave created\n" Apr 12 00:28:05.638: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 12 00:28:05.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8932' Apr 12 00:28:05.901: INFO: stderr: "" Apr 12 00:28:05.901: INFO: stdout: "service/agnhost-master created\n" Apr 12 00:28:05.902: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 12 00:28:05.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8932' Apr 12 00:28:06.153: INFO: stderr: "" Apr 12 00:28:06.153: INFO: stdout: "service/frontend created\n" Apr 12 00:28:06.153: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 12 00:28:06.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8932' Apr 12 00:28:06.393: INFO: stderr: "" Apr 12 00:28:06.393: INFO: stdout: "deployment.apps/frontend created\n" Apr 12 00:28:06.394: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 12 00:28:06.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8932' Apr 12 00:28:06.677: INFO: stderr: "" Apr 12 00:28:06.677: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 12 00:28:06.677: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: 
metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 12 00:28:06.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8932' Apr 12 00:28:06.940: INFO: stderr: "" Apr 12 00:28:06.940: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 12 00:28:06.940: INFO: Waiting for all frontend pods to be Running. Apr 12 00:28:16.991: INFO: Waiting for frontend to serve content. Apr 12 00:28:17.002: INFO: Trying to add a new entry to the guestbook. Apr 12 00:28:17.012: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 12 00:28:17.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8932' Apr 12 00:28:17.144: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 12 00:28:17.144: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 12 00:28:17.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8932' Apr 12 00:28:17.339: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 12 00:28:17.340: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 12 00:28:17.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8932' Apr 12 00:28:17.503: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 12 00:28:17.503: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 12 00:28:17.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8932' Apr 12 00:28:17.604: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 12 00:28:17.604: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 12 00:28:17.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8932' Apr 12 00:28:17.717: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 12 00:28:17.717: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 12 00:28:17.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8932' Apr 12 00:28:17.896: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 12 00:28:17.896: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:28:17.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8932" for this suite. • [SLOW TEST:12.934 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":194,"skipped":3232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 
00:28:18.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 12 00:28:22.808: INFO: Pod pod-hostip-1e0b0d01-ba18-4e76-885e-d21f0bdef590 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:28:22.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3279" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3279,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:28:22.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-98c3402e-f3bd-44fc-9e06-f505df9263c4 STEP: Creating a pod to test consume configMaps Apr 12 
00:28:22.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef" in namespace "configmap-6640" to be "Succeeded or Failed" Apr 12 00:28:22.965: INFO: Pod "pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef": Phase="Pending", Reason="", readiness=false. Elapsed: 80.09513ms Apr 12 00:28:24.970: INFO: Pod "pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084719642s Apr 12 00:28:26.974: INFO: Pod "pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088548544s STEP: Saw pod success Apr 12 00:28:26.974: INFO: Pod "pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef" satisfied condition "Succeeded or Failed" Apr 12 00:28:26.976: INFO: Trying to get logs from node latest-worker pod pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef container configmap-volume-test: STEP: delete the pod Apr 12 00:28:27.012: INFO: Waiting for pod pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef to disappear Apr 12 00:28:27.017: INFO: Pod pod-configmaps-31871a71-17e2-4863-b8ed-dc53e7d649ef no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:28:27.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6640" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3287,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:28:27.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 12 00:28:27.700: INFO: Pod name wrapped-volume-race-08319437-8d58-4f65-b1ca-50ab8a0fde7b: Found 0 pods out of 5 Apr 12 00:28:32.708: INFO: Pod name wrapped-volume-race-08319437-8d58-4f65-b1ca-50ab8a0fde7b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-08319437-8d58-4f65-b1ca-50ab8a0fde7b in namespace emptydir-wrapper-2838, will wait for the garbage collector to delete the pods Apr 12 00:28:46.789: INFO: Deleting ReplicationController wrapped-volume-race-08319437-8d58-4f65-b1ca-50ab8a0fde7b took: 6.181806ms Apr 12 00:28:47.189: INFO: Terminating ReplicationController wrapped-volume-race-08319437-8d58-4f65-b1ca-50ab8a0fde7b pods took: 400.308281ms STEP: Creating RC which spawns configmap-volume pods Apr 12 00:29:03.235: INFO: Pod name 
wrapped-volume-race-2186580e-3681-466d-8a42-44abf165199c: Found 0 pods out of 5 Apr 12 00:29:08.243: INFO: Pod name wrapped-volume-race-2186580e-3681-466d-8a42-44abf165199c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2186580e-3681-466d-8a42-44abf165199c in namespace emptydir-wrapper-2838, will wait for the garbage collector to delete the pods Apr 12 00:29:22.364: INFO: Deleting ReplicationController wrapped-volume-race-2186580e-3681-466d-8a42-44abf165199c took: 12.676874ms Apr 12 00:29:22.765: INFO: Terminating ReplicationController wrapped-volume-race-2186580e-3681-466d-8a42-44abf165199c pods took: 400.396509ms STEP: Creating RC which spawns configmap-volume pods Apr 12 00:29:32.908: INFO: Pod name wrapped-volume-race-1c62188a-a07b-4aa6-bf3e-e5859fc9059a: Found 0 pods out of 5 Apr 12 00:29:37.917: INFO: Pod name wrapped-volume-race-1c62188a-a07b-4aa6-bf3e-e5859fc9059a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1c62188a-a07b-4aa6-bf3e-e5859fc9059a in namespace emptydir-wrapper-2838, will wait for the garbage collector to delete the pods Apr 12 00:29:50.014: INFO: Deleting ReplicationController wrapped-volume-race-1c62188a-a07b-4aa6-bf3e-e5859fc9059a took: 5.519556ms Apr 12 00:29:50.414: INFO: Terminating ReplicationController wrapped-volume-race-1c62188a-a07b-4aa6-bf3e-e5859fc9059a pods took: 400.226749ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:30:04.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2838" for this suite. 
• [SLOW TEST:97.015 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":197,"skipped":3305,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:30:04.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 12 00:30:04.105: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 12 00:30:04.123: INFO: Waiting for terminating namespaces to be deleted... 
Apr 12 00:30:04.126: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 12 00:30:04.151: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:30:04.151: INFO: Container kindnet-cni ready: true, restart count 0 Apr 12 00:30:04.151: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:30:04.151: INFO: Container kube-proxy ready: true, restart count 0 Apr 12 00:30:04.151: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 12 00:30:04.364: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:30:04.364: INFO: Container kindnet-cni ready: true, restart count 0 Apr 12 00:30:04.364: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 12 00:30:04.364: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 12 00:30:04.458: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 12 00:30:04.458: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 12 00:30:04.458: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 12 00:30:04.458: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. 
Apr 12 00:30:04.458: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 12 00:30:04.464: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-55f59b1a-211e-42cc-a21e-ae6062e24ef7.1604eb0bc4587ae0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1799/filler-pod-55f59b1a-211e-42cc-a21e-ae6062e24ef7 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-55f59b1a-211e-42cc-a21e-ae6062e24ef7.1604eb0c467c8465], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-55f59b1a-211e-42cc-a21e-ae6062e24ef7.1604eb0c67206597], Reason = [Created], Message = [Created container filler-pod-55f59b1a-211e-42cc-a21e-ae6062e24ef7] STEP: Considering event: Type = [Normal], Name = [filler-pod-55f59b1a-211e-42cc-a21e-ae6062e24ef7.1604eb0c7a295219], Reason = [Started], Message = [Started container filler-pod-55f59b1a-211e-42cc-a21e-ae6062e24ef7] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b67e6ad-446f-4063-98b5-5a32ea70b981.1604eb0bc1d1843b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1799/filler-pod-9b67e6ad-446f-4063-98b5-5a32ea70b981 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b67e6ad-446f-4063-98b5-5a32ea70b981.1604eb0c07b95334], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b67e6ad-446f-4063-98b5-5a32ea70b981.1604eb0c48ff3b3e], Reason = [Created], Message = [Created container filler-pod-9b67e6ad-446f-4063-98b5-5a32ea70b981] STEP: Considering event: Type = [Normal], Name = [filler-pod-9b67e6ad-446f-4063-98b5-5a32ea70b981.1604eb0c5ba570c0], Reason = [Started], Message = [Started container 
filler-pod-9b67e6ad-446f-4063-98b5-5a32ea70b981] STEP: Considering event: Type = [Warning], Name = [additional-pod.1604eb0cb41c73a4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:30:09.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1799" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.679 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":198,"skipped":3308,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:30:09.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Apr 12 00:30:09.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-8355 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 12 00:30:09.941: INFO: stderr: "" Apr 12 00:30:09.941: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 12 00:30:09.941: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 12 00:30:09.941: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8355" to be "running and ready, or succeeded" Apr 12 00:30:09.944: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.134983ms Apr 12 00:30:11.974: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032298619s Apr 12 00:30:13.992: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.05026961s Apr 12 00:30:13.992: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 12 00:30:13.992: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Apr 12 00:30:13.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8355' Apr 12 00:30:14.111: INFO: stderr: "" Apr 12 00:30:14.111: INFO: stdout: "I0412 00:30:12.583496 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/jz5 320\nI0412 00:30:12.783628 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/nmv 456\nI0412 00:30:12.983766 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/dxlp 573\nI0412 00:30:13.183658 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/j2wr 400\nI0412 00:30:13.383663 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/q29 240\nI0412 00:30:13.583730 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/msv 253\nI0412 00:30:13.783656 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/rtf7 421\nI0412 00:30:13.983705 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/dzl 466\n" STEP: limiting log lines Apr 12 00:30:14.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8355 --tail=1' Apr 12 00:30:14.217: INFO: stderr: "" Apr 12 00:30:14.217: INFO: stdout: "I0412 00:30:14.183641 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/74h 215\n" Apr 12 00:30:14.217: INFO: got output "I0412 00:30:14.183641 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/74h 215\n" STEP: limiting log bytes Apr 12 00:30:14.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8355 --limit-bytes=1' Apr 12 00:30:14.331: INFO: stderr: "" Apr 12 00:30:14.331: INFO: stdout: "I" Apr 12 00:30:14.331: INFO: got output "I" STEP: exposing timestamps Apr 12 00:30:14.331: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8355 --tail=1 --timestamps' Apr 12 00:30:14.447: INFO: stderr: "" Apr 12 00:30:14.447: INFO: stdout: "2020-04-12T00:30:14.383841728Z I0412 00:30:14.383660 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/8bvs 325\n" Apr 12 00:30:14.447: INFO: got output "2020-04-12T00:30:14.383841728Z I0412 00:30:14.383660 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/8bvs 325\n" STEP: restricting to a time range Apr 12 00:30:16.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8355 --since=1s' Apr 12 00:30:17.063: INFO: stderr: "" Apr 12 00:30:17.063: INFO: stdout: "I0412 00:30:16.183630 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/zwkf 236\nI0412 00:30:16.383680 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/lvn 225\nI0412 00:30:16.583671 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/nzz 513\nI0412 00:30:16.783650 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/wpb5 250\nI0412 00:30:16.983641 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/5jh 506\n" Apr 12 00:30:17.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8355 --since=24h' Apr 12 00:30:17.172: INFO: stderr: "" Apr 12 00:30:17.172: INFO: stdout: "I0412 00:30:12.583496 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/jz5 320\nI0412 00:30:12.783628 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/nmv 456\nI0412 00:30:12.983766 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/dxlp 573\nI0412 00:30:13.183658 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/j2wr 400\nI0412 00:30:13.383663 1 
logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/q29 240\nI0412 00:30:13.583730 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/msv 253\nI0412 00:30:13.783656 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/rtf7 421\nI0412 00:30:13.983705 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/dzl 466\nI0412 00:30:14.183641 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/74h 215\nI0412 00:30:14.383660 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/8bvs 325\nI0412 00:30:14.583658 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/gj4 433\nI0412 00:30:14.783654 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/zwrf 565\nI0412 00:30:14.983657 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/kv7j 526\nI0412 00:30:15.183662 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/b6n 226\nI0412 00:30:15.383667 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/rjr 423\nI0412 00:30:15.583650 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/vk2b 480\nI0412 00:30:15.783649 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/dhm 534\nI0412 00:30:15.983663 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/jpj9 478\nI0412 00:30:16.183630 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/zwkf 236\nI0412 00:30:16.383680 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/lvn 225\nI0412 00:30:16.583671 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/nzz 513\nI0412 00:30:16.783650 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/wpb5 250\nI0412 00:30:16.983641 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/5jh 506\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 12 00:30:17.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8355' Apr 12 00:30:22.982: INFO: stderr: "" Apr 12 00:30:22.982: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:30:22.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8355" for this suite. • [SLOW TEST:13.273 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":199,"skipped":3327,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:30:22.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-jqcz STEP: Creating a pod to test atomic-volume-subpath Apr 12 00:30:23.122: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jqcz" in namespace "subpath-8634" to be "Succeeded or Failed" Apr 12 00:30:23.139: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.462482ms Apr 12 00:30:25.143: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021761062s Apr 12 00:30:27.148: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 4.026301649s Apr 12 00:30:29.152: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 6.030779002s Apr 12 00:30:31.156: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 8.034331369s Apr 12 00:30:33.160: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 10.038285947s Apr 12 00:30:35.164: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 12.042769276s Apr 12 00:30:37.169: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 14.047372844s Apr 12 00:30:39.174: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 16.05192891s Apr 12 00:30:41.178: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 18.056129249s Apr 12 00:30:43.181: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. Elapsed: 20.059789884s Apr 12 00:30:45.186: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.064018947s Apr 12 00:30:47.190: INFO: Pod "pod-subpath-test-projected-jqcz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068455301s STEP: Saw pod success Apr 12 00:30:47.190: INFO: Pod "pod-subpath-test-projected-jqcz" satisfied condition "Succeeded or Failed" Apr 12 00:30:47.194: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-jqcz container test-container-subpath-projected-jqcz: STEP: delete the pod Apr 12 00:30:47.242: INFO: Waiting for pod pod-subpath-test-projected-jqcz to disappear Apr 12 00:30:47.272: INFO: Pod pod-subpath-test-projected-jqcz no longer exists STEP: Deleting pod pod-subpath-test-projected-jqcz Apr 12 00:30:47.272: INFO: Deleting pod "pod-subpath-test-projected-jqcz" in namespace "subpath-8634" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:30:47.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8634" for this suite. 
• [SLOW TEST:24.291 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":200,"skipped":3334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:30:47.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:30:51.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4788" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":201,"skipped":3367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:30:52.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7362.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7362.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7362.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7362.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 12 00:30:58.166: INFO: DNS probes using dns-7362/dns-test-cf48cdb1-6cc7-46c2-81fa-6b1261e605e8 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:30:58.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7362" for this suite. 
• [SLOW TEST:6.283 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":202,"skipped":3399,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:30:58.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Apr 12 00:30:58.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9726'
Apr 12 00:30:58.950: INFO: stderr: ""
Apr 12 00:30:58.950: INFO: stdout: "pod/pause created\n"
Apr 12 00:30:58.950: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 12 00:30:58.950: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9726" to be "running and ready"
Apr 12 00:30:58.968: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.551036ms
Apr 12 00:31:00.991: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040957943s
Apr 12 00:31:02.995: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.04528834s
Apr 12 00:31:02.995: INFO: Pod "pause" satisfied condition "running and ready"
Apr 12 00:31:02.995: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 12 00:31:02.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9726'
Apr 12 00:31:03.095: INFO: stderr: ""
Apr 12 00:31:03.095: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 12 00:31:03.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9726'
Apr 12 00:31:03.188: INFO: stderr: ""
Apr 12 00:31:03.188: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 12 00:31:03.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9726'
Apr 12 00:31:03.315: INFO: stderr: ""
Apr 12 00:31:03.315: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 12 00:31:03.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9726'
Apr 12 00:31:03.399: INFO: stderr: ""
Apr 12 00:31:03.399: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Apr 12 00:31:03.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9726'
Apr 12 00:31:03.552: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 12 00:31:03.552: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 12 00:31:03.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9726'
Apr 12 00:31:03.648: INFO: stderr: "No resources found in kubectl-9726 namespace.\n"
Apr 12 00:31:03.648: INFO: stdout: ""
Apr 12 00:31:03.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9726 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 12 00:31:03.862: INFO: stderr: ""
Apr 12 00:31:03.862: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:31:03.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9726" for this suite.
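The `kubectl label` test above exercises three operations: adding `testing-label=testing-label-value`, verifying it via the `-L testing-label` output column, and removing it with the trailing-dash form `testing-label-`. The add/remove semantics of a single label argument can be sketched against a plain label map (the `apply_label_arg` helper is hypothetical, modeling kubectl's argument syntax rather than reproducing its code):

```python
def apply_label_arg(labels: dict, arg: str) -> dict:
    """Apply one kubectl-style label argument to a label mapping.

    'key=value' sets the label; a trailing 'key-' removes it,
    mirroring `kubectl label pods pause testing-label-`.
    """
    labels = dict(labels)  # operate on a copy
    if arg.endswith("-") and "=" not in arg:
        labels.pop(arg[:-1], None)  # removal of an absent key is a no-op
    else:
        key, _, value = arg.partition("=")
        labels[key] = value
    return labels

labels = apply_label_arg({}, "testing-label=testing-label-value")
assert labels == {"testing-label": "testing-label-value"}
labels = apply_label_arg(labels, "testing-label-")
assert labels == {}
```

After removal, the `-L testing-label` column in the log prints empty, which is exactly the missing-key case above.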
• [SLOW TEST:5.557 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":203,"skipped":3407,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:31:03.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:31:08.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5525" for this suite.
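The hostAliases test above asserts that entries from `pod.spec.hostAliases` end up in the pod's `/etc/hosts`. The projection can be sketched as follows (a simplified model, not the kubelet's actual hosts-file writer; the IP and hostnames are hypothetical values, and the exact whitespace the kubelet emits may differ):

```python
def render_host_aliases(aliases):
    """Render pod.spec.hostAliases entries as /etc/hosts-style lines.

    Each alias is {'ip': ..., 'hostnames': [...]}; one line per IP,
    hostnames following the IP on the same line.
    """
    return "\n".join(
        alias["ip"] + "\t" + "\t".join(alias["hostnames"])
        for alias in aliases
    )

lines = render_host_aliases(
    [{"ip": "123.45.67.89", "hostnames": ["foo.remote", "bar.remote"]}]
)
assert lines == "123.45.67.89\tfoo.remote\tbar.remote"
```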
•
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3422,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:31:08.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6772.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6772.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 12 00:31:14.151: INFO: DNS probes using dns-test-8d5f9589-5b2a-4b4e-ae61-6e424d2576c3 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6772.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6772.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 12 00:31:20.321: INFO: File wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:20.325: INFO: File jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:20.325: INFO: Lookups using dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e failed for: [wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local]
Apr 12 00:31:25.329: INFO: File wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:25.332: INFO: File jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:25.332: INFO: Lookups using dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e failed for: [wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local]
Apr 12 00:31:30.330: INFO: File wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:30.335: INFO: File jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:30.335: INFO: Lookups using dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e failed for: [wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local]
Apr 12 00:31:35.330: INFO: File wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:35.333: INFO: File jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:35.334: INFO: Lookups using dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e failed for: [wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local]
Apr 12 00:31:40.330: INFO: File wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:40.334: INFO: File jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local from pod dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 12 00:31:40.334: INFO: Lookups using dns-6772/dns-test-e141def0-7839-45b3-bed0-085dfd22533e failed for: [wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local]
Apr 12 00:31:45.335: INFO: DNS probes using dns-test-e141def0-7839-45b3-bed0-085dfd22533e succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6772.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6772.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6772.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6772.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 12 00:31:51.704: INFO: DNS probes using dns-test-0b67d299-ae56-4790-ba62-14aa817ef4e5 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:31:51.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6772" for this suite.
• [SLOW TEST:43.737 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":205,"skipped":3495,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:31:51.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 12 00:31:52.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2" in namespace "projected-6865" to be "Succeeded or Failed"
Apr 12 00:31:52.220: INFO: Pod "downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.075494ms
Apr 12 00:31:54.279: INFO: Pod "downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09781047s
Apr 12 00:31:56.297: INFO: Pod "downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115856341s
STEP: Saw pod success
Apr 12 00:31:56.297: INFO: Pod "downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2" satisfied condition "Succeeded or Failed"
Apr 12 00:31:56.299: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2 container client-container:
STEP: delete the pod
Apr 12 00:31:56.463: INFO: Waiting for pod downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2 to disappear
Apr 12 00:31:56.491: INFO: Pod downwardapi-volume-8f802a15-e2a9-4fd2-bc10-d3bc6d9d07b2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:31:56.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6865" for this suite.
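In the ExternalName DNS test above, lookups keep returning the stale `foo.example.com.` answer for roughly 25 seconds after the service is changed to `bar.example.com.`, and the framework simply re-polls until the expected CNAME appears. That poll-until-converged pattern can be sketched like this (the `resolve` callable stands in for the dig-based probe; the retry count and in-memory resolver are illustrative):

```python
def wait_for_cname(resolve, expected, attempts=10):
    """Re-poll a resolver until it returns the expected CNAME.

    Returns the number of attempts used; raises if the record never
    converges, analogous to the e2e probe's overall timeout.
    """
    answer = ""
    for attempt in range(1, attempts + 1):
        answer = resolve()
        if answer.strip() == expected:
            return attempt
        # the e2e framework logs the mismatch and sleeps before retrying
    raise TimeoutError(f"still seeing {answer!r}, wanted {expected!r}")

# A stale cache serves foo.example.com. twice before converging.
answers = iter(["foo.example.com. ", "foo.example.com. ", "bar.example.com."])
assert wait_for_cname(lambda: next(answers), "bar.example.com.") == 3
```

The trailing space in `'foo.example.com. '` in the log is why the real probe (and the sketch) compares trimmed answers.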
•
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3499,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:31:56.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 12 00:31:56.572: INFO: Waiting up to 5m0s for pod "downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed" in namespace "downward-api-9253" to be "Succeeded or Failed"
Apr 12 00:31:56.576: INFO: Pod "downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.102187ms
Apr 12 00:31:58.579: INFO: Pod "downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006380463s
Apr 12 00:32:00.582: INFO: Pod "downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009639371s
STEP: Saw pod success
Apr 12 00:32:00.582: INFO: Pod "downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed" satisfied condition "Succeeded or Failed"
Apr 12 00:32:00.585: INFO: Trying to get logs from node latest-worker pod downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed container dapi-container:
STEP: delete the pod
Apr 12 00:32:00.675: INFO: Waiting for pod downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed to disappear
Apr 12 00:32:00.682: INFO: Pod downward-api-ea2b579c-e5cc-44b7-9fa2-30bb433616ed no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:32:00.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9253" for this suite.
•
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3505,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:32:00.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 12 00:32:01.192: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 12 00:32:03.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248321, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248321, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248321, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248321, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 12 00:32:06.227: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:32:18.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4767" for this suite.
STEP: Destroying namespace "webhook-4767-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.739 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":208,"skipped":3554,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:32:18.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 12 00:32:18.524: INFO: Waiting up to 5m0s for pod "downward-api-037119d6-7b26-4e34-9a02-36076b863b42" in namespace "downward-api-1181" to be "Succeeded or Failed"
Apr 12 00:32:18.543: INFO: Pod "downward-api-037119d6-7b26-4e34-9a02-36076b863b42": Phase="Pending", Reason="", readiness=false. Elapsed: 18.706339ms
Apr 12 00:32:20.547: INFO: Pod "downward-api-037119d6-7b26-4e34-9a02-36076b863b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022720236s
Apr 12 00:32:22.551: INFO: Pod "downward-api-037119d6-7b26-4e34-9a02-36076b863b42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027185955s
STEP: Saw pod success
Apr 12 00:32:22.551: INFO: Pod "downward-api-037119d6-7b26-4e34-9a02-36076b863b42" satisfied condition "Succeeded or Failed"
Apr 12 00:32:22.554: INFO: Trying to get logs from node latest-worker2 pod downward-api-037119d6-7b26-4e34-9a02-36076b863b42 container dapi-container:
STEP: delete the pod
Apr 12 00:32:22.577: INFO: Waiting for pod downward-api-037119d6-7b26-4e34-9a02-36076b863b42 to disappear
Apr 12 00:32:22.603: INFO: Pod downward-api-037119d6-7b26-4e34-9a02-36076b863b42 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:32:22.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1181" for this suite.
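The admission-webhook timeout test earlier in this run exercises three combinations: a 1s `timeoutSeconds` against a 5s-slow webhook fails the request, the same combination with `failurePolicy: Ignore` lets it through, and a timeout longer than the latency (or the v1 default of 10s) succeeds. That decision table can be sketched as follows (a simplification: a real apiserver distinguishes timeouts from other webhook call errors, and Ignore only covers call failures, not explicit denials):

```python
def admit(timeout_s, webhook_latency_s, failure_policy="Fail"):
    """Model whether a request survives a slow admission webhook.

    If the webhook answers within the timeout, the request is admitted.
    On timeout, failurePolicy decides: Ignore admits, Fail rejects.
    """
    if webhook_latency_s <= timeout_s:
        return True
    return failure_policy == "Ignore"

assert admit(1, 5) is False                           # timeout shorter, Fail
assert admit(1, 5, failure_policy="Ignore") is True   # timeout shorter, Ignore
assert admit(30, 5) is True                           # timeout longer than latency
assert admit(10, 5) is True                           # v1 default timeout of 10s
```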
•
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3564,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:32:22.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-4195675a-087f-42ce-8e79-3b771444960b in namespace container-probe-2826
Apr 12 00:32:26.700: INFO: Started pod liveness-4195675a-087f-42ce-8e79-3b771444960b in namespace container-probe-2826
STEP: checking the pod's current state and verifying that restartCount is present
Apr 12 00:32:26.703: INFO: Initial restart count of pod liveness-4195675a-087f-42ce-8e79-3b771444960b is 0
Apr 12 00:32:46.744: INFO: Restart count of pod container-probe-2826/liveness-4195675a-087f-42ce-8e79-3b771444960b is now 1 (20.041441849s elapsed)
Apr 12 00:33:06.792: INFO: Restart count of pod container-probe-2826/liveness-4195675a-087f-42ce-8e79-3b771444960b is now 2 (40.088927966s elapsed)
Apr 12 00:33:26.833: INFO: Restart count of pod container-probe-2826/liveness-4195675a-087f-42ce-8e79-3b771444960b is now 3 (1m0.130568254s elapsed)
Apr 12 00:33:46.874: INFO: Restart count of pod container-probe-2826/liveness-4195675a-087f-42ce-8e79-3b771444960b is now 4 (1m20.170875618s elapsed)
Apr 12 00:34:57.088: INFO: Restart count of pod container-probe-2826/liveness-4195675a-087f-42ce-8e79-3b771444960b is now 5 (2m30.384785943s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:34:57.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2826" for this suite.
• [SLOW TEST:154.544 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3571,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:34:57.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:35:30.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1432" for this suite.
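The three containers in the runtime test above (the 'rpa', 'rpof' and 'rpn' suffixes correspond to restartPolicy Always, OnFailure and Never) differ in whether the kubelet restarts them after exit. The restart rule can be sketched as (a simplified model of the kubelet's policy check; exit code 0 counts as success):

```python
def should_restart(restart_policy, exit_code):
    """Kubelet-style restart decision for a terminated container."""
    if restart_policy == "Always":
        return True          # restarted regardless of exit status
    if restart_policy == "OnFailure":
        return exit_code != 0  # only failed exits are restarted
    return False             # "Never": no restarts at all

assert should_restart("Always", 0) and should_restart("Always", 1)
assert not should_restart("OnFailure", 0)
assert should_restart("OnFailure", 1)
assert not should_restart("Never", 1)
```

This is also why the expected 'Phase' differs per container: a pod whose containers are never restarted can reach Succeeded or Failed, while restartPolicy Always keeps the pod Running.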
• [SLOW TEST:33.667 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3594,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:35:30.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 12 00:35:31.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 12 00:35:33.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248531, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248531, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248531, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248531, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 12 00:35:36.412: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:35:36.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7975" for this suite.
STEP: Destroying namespace "webhook-7975-markers" for this suite.
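A mutating webhook like the one registered above answers the apiserver's AdmissionReview with a base64-encoded JSONPatch, which the apiserver applies before running defaulting on the mutated object. A minimal sketch of building such a response (the UID and the patch that injects an empty initContainers list are hypothetical, only loosely echoing what the e2e sample webhook does):

```python
import base64
import json

def mutate_response(uid, patch_ops):
    """Build an admission/v1 AdmissionReview response carrying a JSONPatch."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,            # must echo the request's UID
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }

resp = mutate_response("abc-123", [
    {"op": "add", "path": "/spec/initContainers", "value": []},
])
decoded = json.loads(base64.b64decode(resp["response"]["patch"]))
assert decoded == [{"op": "add", "path": "/spec/initContainers", "value": []}]
```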
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.819 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":212,"skipped":3595,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:35:36.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-c9c957f1-5066-4a37-bcf4-db3f084e7b53 STEP: Creating a pod to test consume secrets Apr 12 00:35:36.701: INFO: Waiting up to 5m0s for pod "pod-secrets-907ad973-df77-414d-a121-91506e1857b0" in namespace "secrets-1719" to be "Succeeded or Failed" Apr 12 00:35:36.706: INFO: Pod "pod-secrets-907ad973-df77-414d-a121-91506e1857b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.048895ms Apr 12 00:35:38.710: INFO: Pod "pod-secrets-907ad973-df77-414d-a121-91506e1857b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008436587s Apr 12 00:35:40.773: INFO: Pod "pod-secrets-907ad973-df77-414d-a121-91506e1857b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071845796s STEP: Saw pod success Apr 12 00:35:40.773: INFO: Pod "pod-secrets-907ad973-df77-414d-a121-91506e1857b0" satisfied condition "Succeeded or Failed" Apr 12 00:35:40.776: INFO: Trying to get logs from node latest-worker pod pod-secrets-907ad973-df77-414d-a121-91506e1857b0 container secret-volume-test: STEP: delete the pod Apr 12 00:35:40.868: INFO: Waiting for pod pod-secrets-907ad973-df77-414d-a121-91506e1857b0 to disappear Apr 12 00:35:40.892: INFO: Pod pod-secrets-907ad973-df77-414d-a121-91506e1857b0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:35:40.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1719" for this suite. 
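The secrets-volume test passes because the kubelet projects each base64-encoded key of the Secret's `data` map into the volume as a file whose content is the decoded value. A sketch of that mapping (the helper name `secret_to_files` is hypothetical):

```python
import base64

def secret_to_files(secret):
    """Map a Secret's base64-encoded `data` to {filename: plaintext bytes},
    mirroring how the kubelet projects each key as a file in a secret volume."""
    return {key: base64.b64decode(value)
            for key, value in secret.get("data", {}).items()}
```

The pod's test container simply `cat`s the projected file and exits, which is why the pod reaching `Succeeded` is enough to confirm the secret was consumable.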
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:35:40.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:35:40.938: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 12 00:35:43.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 create -f -' Apr 12 00:35:46.761: INFO: stderr: "" Apr 12 00:35:46.761: INFO: stdout: "e2e-test-crd-publish-openapi-5512-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 12 00:35:46.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 delete e2e-test-crd-publish-openapi-5512-crds test-foo' Apr 12 00:35:46.864: INFO: stderr: "" Apr 12 00:35:46.864: INFO: stdout: "e2e-test-crd-publish-openapi-5512-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 12 00:35:46.864: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 apply -f -' Apr 12 00:35:47.162: INFO: stderr: "" Apr 12 00:35:47.162: INFO: stdout: "e2e-test-crd-publish-openapi-5512-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 12 00:35:47.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 delete e2e-test-crd-publish-openapi-5512-crds test-foo' Apr 12 00:35:47.265: INFO: stderr: "" Apr 12 00:35:47.265: INFO: stdout: "e2e-test-crd-publish-openapi-5512-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 12 00:35:47.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 create -f -' Apr 12 00:35:47.496: INFO: rc: 1 Apr 12 00:35:47.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 apply -f -' Apr 12 00:35:47.756: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 12 00:35:47.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 create -f -' Apr 12 00:35:47.991: INFO: rc: 1 Apr 12 00:35:47.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6559 apply -f -' Apr 12 00:35:48.227: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 12 00:35:48.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-5512-crds' Apr 12 00:35:48.449: INFO: stderr: "" Apr 12 00:35:48.449: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 12 00:35:48.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5512-crds.metadata' Apr 12 00:35:48.680: INFO: stderr: "" Apr 12 00:35:48.680: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. 
They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. 
After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 12 00:35:48.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5512-crds.spec' Apr 12 00:35:48.934: INFO: stderr: "" Apr 12 00:35:48.934: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 12 00:35:48.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5512-crds.spec.bars' Apr 12 00:35:49.167: INFO: stderr: "" Apr 12 00:35:49.167: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 12 00:35:49.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5512-crds.spec.bars2' Apr 12 00:35:49.429: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:35:52.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6559" for this suite. 
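The `rc: 1` results above are client-side validation rejections: kubectl validates the custom resource against the published structural schema before sending it, failing on unknown properties (when the schema disallows them) and on missing required properties. A much-simplified sketch of those two checks, not a full OpenAPI v3 validator:

```python
def validate(obj, schema):
    """Minimal sketch of the two schema checks the test exercises:
    unknown fields rejected when additionalProperties is false, and
    required fields enforced. Nested object validation is omitted."""
    errors = []
    props = schema.get("properties", {})
    if schema.get("additionalProperties") is False:
        # Reject any field the schema does not declare.
        errors += [f"unknown field {k!r}" for k in obj if k not in props]
    # Reject objects missing a declared required field.
    errors += [f"missing required field {k!r}"
               for k in schema.get("required", []) if k not in obj]
    return errors
```

With a schema like the test's `bars` items (where `name` is `-required-`), an object with an undeclared field or without `name` fails exactly as the `create`/`apply` invocations above do.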
• [SLOW TEST:11.440 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":214,"skipped":3626,"failed":0}
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:35:52.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-be9aa02e-179d-49c1-809e-b7206ce0f1b4 in namespace container-probe-9557
Apr 12 00:35:56.466: INFO: Started pod busybox-be9aa02e-179d-49c1-809e-b7206ce0f1b4 in namespace container-probe-9557
STEP: checking the pod's current state and verifying that restartCount is present
Apr 12 00:35:56.469: INFO: Initial restart count of pod busybox-be9aa02e-179d-49c1-809e-b7206ce0f1b4 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:39:57.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9557" for this suite. • [SLOW TEST:244.740 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3626,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:39:57.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 12 00:39:57.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 12 00:39:57.212: INFO: 
stderr: "" Apr 12 00:39:57.212: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:39:57.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2888" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":216,"skipped":3642,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:39:57.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 12 00:39:57.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1713' Apr 12 00:39:57.563: INFO: stderr: "" Apr 12 00:39:57.563: INFO: stdout: 
"replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 12 00:39:58.567: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:39:58.567: INFO: Found 0 / 1 Apr 12 00:39:59.603: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:39:59.603: INFO: Found 0 / 1 Apr 12 00:40:00.568: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:40:00.568: INFO: Found 0 / 1 Apr 12 00:40:01.567: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:40:01.567: INFO: Found 1 / 1 Apr 12 00:40:01.567: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 12 00:40:01.570: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:40:01.570: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 12 00:40:01.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-nm9wl --namespace=kubectl-1713 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 12 00:40:01.707: INFO: stderr: "" Apr 12 00:40:01.708: INFO: stdout: "pod/agnhost-master-nm9wl patched\n" STEP: checking annotations Apr 12 00:40:01.748: INFO: Selector matched 1 pods for map[app:agnhost] Apr 12 00:40:01.748: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:40:01.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1713" for this suite. 
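The `kubectl patch ... -p {"metadata":{"annotations":{"x":"y"}}}` invocation above merges the patch into the pod's existing metadata; for plain maps like `annotations`, the default strategic merge patch behaves like an RFC 7386 JSON merge patch. A sketch of that merge semantics (the helper name `merge_patch` is hypothetical):

```python
def merge_patch(target, patch):
    """RFC 7386 JSON merge patch: dicts merge recursively, a null value
    deletes a key, and any non-dict patch value replaces the target.
    This matches how the annotation patch in the log lands in pod metadata."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "remove this key"
        else:
            result[key] = merge_patch(result.get(key), value)
    return result
```

Note the existing pod fields (name, prior annotations) survive untouched; only the patched key is added, which is what the subsequent "checking annotations" step verifies.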
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":217,"skipped":3643,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:40:01.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:40:01.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba" in namespace "projected-3186" to be "Succeeded or Failed" Apr 12 00:40:01.853: INFO: Pod "downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.578633ms Apr 12 00:40:03.856: INFO: Pod "downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006834301s Apr 12 00:40:05.860: INFO: Pod "downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0110645s STEP: Saw pod success Apr 12 00:40:05.860: INFO: Pod "downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba" satisfied condition "Succeeded or Failed" Apr 12 00:40:05.863: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba container client-container: STEP: delete the pod Apr 12 00:40:05.898: INFO: Waiting for pod downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba to disappear Apr 12 00:40:05.902: INFO: Pod downwardapi-volume-0d5d4384-9457-4ccd-ad74-48d0e25630ba no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:40:05.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3186" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:40:05.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 12 00:40:14.088: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 12 00:40:14.106: INFO: Pod pod-with-poststart-http-hook still exists
Apr 12 00:40:16.106: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 12 00:40:16.119: INFO: Pod pod-with-poststart-http-hook still exists
Apr 12 00:40:18.106: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 12 00:40:18.129: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:40:18.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-770" for this suite.
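The repeated "Waiting for pod ... to disappear / still exists" lines above come from a fixed-interval poll (roughly every 2 s here) that retries until the pod is gone or a timeout expires. A sketch of that pattern with an injectable clock and sleep so it can be exercised without real waiting (the name `wait_until_gone` is hypothetical):

```python
import time

def wait_until_gone(get_pod, poll_interval=2.0, timeout=30.0,
                    sleep=time.sleep, clock=time.monotonic):
    """Poll until get_pod() returns None (pod deleted) or the timeout
    expires, mirroring the 2 s "Waiting for pod ... to disappear" loop
    in the log. Returns True if the pod disappeared in time."""
    deadline = clock() + timeout
    while clock() < deadline:
        if get_pod() is None:
            return True          # pod no longer exists
        sleep(poll_interval)     # pod still exists; back off and retry
    return False
```

Injecting `sleep`/`clock` is a common way to make such wait loops deterministic in tests; the real e2e framework uses an equivalent poll helper in Go.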
• [SLOW TEST:12.243 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3690,"failed":0}
SSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:40:18.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 12 00:40:18.240: INFO: Created pod &Pod{ObjectMeta:{dns-5700 dns-5700 /api/v1/namespaces/dns-5700/pods/dns-5700 5a25e461-f459-4fed-8b08-2bc35b8e4f66 7349752 0 2020-04-12 00:40:18 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-66d6s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-66d6s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-66d6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecret
s:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:40:18.254: INFO: The status of Pod dns-5700 is Pending, waiting for it to be Running (with Ready = true) Apr 12 00:40:20.267: INFO: The status of Pod dns-5700 is Pending, waiting for it to be Running (with Ready = true) Apr 12 00:40:22.257: INFO: The status of Pod dns-5700 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 12 00:40:22.258: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5700 PodName:dns-5700 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:40:22.258: INFO: >>> kubeConfig: /root/.kube/config I0412 00:40:22.290398 7 log.go:172] (0xc0030a24d0) (0xc00223b900) Create stream I0412 00:40:22.290436 7 log.go:172] (0xc0030a24d0) (0xc00223b900) Stream added, broadcasting: 1 I0412 00:40:22.292412 7 log.go:172] (0xc0030a24d0) Reply frame received for 1 I0412 00:40:22.292454 7 log.go:172] (0xc0030a24d0) (0xc002a1a1e0) Create stream I0412 00:40:22.292470 7 log.go:172] (0xc0030a24d0) (0xc002a1a1e0) Stream added, broadcasting: 3 I0412 00:40:22.293358 7 log.go:172] (0xc0030a24d0) Reply frame received for 3 I0412 00:40:22.293376 7 log.go:172] (0xc0030a24d0) (0xc00223bb80) Create stream I0412 00:40:22.293387 7 log.go:172] (0xc0030a24d0) (0xc00223bb80) Stream added, broadcasting: 5 I0412 00:40:22.294319 7 log.go:172] (0xc0030a24d0) Reply frame received for 5 I0412 00:40:22.390665 7 log.go:172] (0xc0030a24d0) Data frame received for 3 I0412 00:40:22.390703 7 log.go:172] (0xc002a1a1e0) (3) Data frame handling I0412 00:40:22.390724 7 log.go:172] (0xc002a1a1e0) (3) Data frame sent I0412 00:40:22.391229 7 log.go:172] (0xc0030a24d0) Data frame received for 3 I0412 00:40:22.391255 7 log.go:172] (0xc002a1a1e0) (3) Data frame handling I0412 00:40:22.391497 7 log.go:172] (0xc0030a24d0) Data frame received for 5 I0412 00:40:22.391533 7 log.go:172] (0xc00223bb80) (5) Data frame handling I0412 00:40:22.393563 7 log.go:172] (0xc0030a24d0) Data frame received for 1 I0412 00:40:22.393609 7 log.go:172] (0xc00223b900) (1) Data frame handling I0412 00:40:22.393653 7 log.go:172] (0xc00223b900) (1) Data frame sent I0412 00:40:22.393675 7 log.go:172] (0xc0030a24d0) (0xc00223b900) Stream removed, broadcasting: 1 I0412 00:40:22.393694 7 log.go:172] (0xc0030a24d0) Go away received I0412 00:40:22.393849 7 log.go:172] (0xc0030a24d0) 
(0xc00223b900) Stream removed, broadcasting: 1 I0412 00:40:22.393880 7 log.go:172] (0xc0030a24d0) (0xc002a1a1e0) Stream removed, broadcasting: 3 I0412 00:40:22.393902 7 log.go:172] (0xc0030a24d0) (0xc00223bb80) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 12 00:40:22.393: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5700 PodName:dns-5700 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:40:22.393: INFO: >>> kubeConfig: /root/.kube/config I0412 00:40:22.451795 7 log.go:172] (0xc0030a2b00) (0xc00223bea0) Create stream I0412 00:40:22.451842 7 log.go:172] (0xc0030a2b00) (0xc00223bea0) Stream added, broadcasting: 1 I0412 00:40:22.454226 7 log.go:172] (0xc0030a2b00) Reply frame received for 1 I0412 00:40:22.454269 7 log.go:172] (0xc0030a2b00) (0xc0019be0a0) Create stream I0412 00:40:22.454284 7 log.go:172] (0xc0030a2b00) (0xc0019be0a0) Stream added, broadcasting: 3 I0412 00:40:22.455252 7 log.go:172] (0xc0030a2b00) Reply frame received for 3 I0412 00:40:22.455300 7 log.go:172] (0xc0030a2b00) (0xc0019be1e0) Create stream I0412 00:40:22.455322 7 log.go:172] (0xc0030a2b00) (0xc0019be1e0) Stream added, broadcasting: 5 I0412 00:40:22.456167 7 log.go:172] (0xc0030a2b00) Reply frame received for 5 I0412 00:40:22.524117 7 log.go:172] (0xc0030a2b00) Data frame received for 3 I0412 00:40:22.524148 7 log.go:172] (0xc0019be0a0) (3) Data frame handling I0412 00:40:22.524165 7 log.go:172] (0xc0019be0a0) (3) Data frame sent I0412 00:40:22.525085 7 log.go:172] (0xc0030a2b00) Data frame received for 5 I0412 00:40:22.525244 7 log.go:172] (0xc0019be1e0) (5) Data frame handling I0412 00:40:22.525327 7 log.go:172] (0xc0030a2b00) Data frame received for 3 I0412 00:40:22.525446 7 log.go:172] (0xc0019be0a0) (3) Data frame handling I0412 00:40:22.527846 7 log.go:172] (0xc0030a2b00) Data frame received for 1 I0412 00:40:22.527860 7 log.go:172] (0xc00223bea0) (1) 
Data frame handling I0412 00:40:22.527867 7 log.go:172] (0xc00223bea0) (1) Data frame sent I0412 00:40:22.527876 7 log.go:172] (0xc0030a2b00) (0xc00223bea0) Stream removed, broadcasting: 1 I0412 00:40:22.527928 7 log.go:172] (0xc0030a2b00) Go away received I0412 00:40:22.527968 7 log.go:172] (0xc0030a2b00) (0xc00223bea0) Stream removed, broadcasting: 1 I0412 00:40:22.527991 7 log.go:172] (0xc0030a2b00) (0xc0019be0a0) Stream removed, broadcasting: 3 I0412 00:40:22.527997 7 log.go:172] (0xc0030a2b00) (0xc0019be1e0) Stream removed, broadcasting: 5 Apr 12 00:40:22.528: INFO: Deleting pod dns-5700... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:40:22.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5700" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":220,"skipped":3693,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:40:22.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: 
Wait for the deployment to be ready Apr 12 00:40:23.356: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:40:25.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248823, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248823, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248823, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248823, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:40:28.394: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass 
the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:40:38.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8579" for this suite. STEP: Destroying namespace "webhook-8579-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.109 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":221,"skipped":3693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:40:38.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 12 00:40:38.735: INFO: Waiting up to 5m0s for pod "client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53" in namespace "containers-3281" to be "Succeeded or Failed" Apr 12 00:40:38.739: INFO: Pod "client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190353ms Apr 12 00:40:40.744: INFO: Pod "client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008324517s Apr 12 00:40:42.748: INFO: Pod "client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012392122s STEP: Saw pod success Apr 12 00:40:42.748: INFO: Pod "client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53" satisfied condition "Succeeded or Failed" Apr 12 00:40:42.750: INFO: Trying to get logs from node latest-worker2 pod client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53 container test-container: STEP: delete the pod Apr 12 00:40:42.795: INFO: Waiting for pod client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53 to disappear Apr 12 00:40:42.800: INFO: Pod client-containers-64b43ab3-734c-4b75-90e5-de5786af9e53 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:40:42.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3281" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3718,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:40:42.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 12 00:40:42.873: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:42.910: INFO: Number of nodes with available pods: 0 Apr 12 00:40:42.910: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:40:43.914: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:43.917: INFO: Number of nodes with available pods: 0 Apr 12 00:40:43.917: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:40:44.913: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:44.916: INFO: Number of nodes with available pods: 0 Apr 12 00:40:44.916: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:40:45.914: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:45.918: INFO: Number of nodes with available pods: 0 Apr 12 00:40:45.918: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:40:46.914: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:46.917: INFO: Number of nodes with available pods: 1 Apr 12 00:40:46.917: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:40:47.922: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:47.934: INFO: Number of nodes with available pods: 2 Apr 12 00:40:47.934: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 12 00:40:47.973: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:47.977: INFO: Number of nodes with available pods: 1 Apr 12 00:40:47.977: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:48.981: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:48.984: INFO: Number of nodes with available pods: 1 Apr 12 00:40:48.984: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:49.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:49.986: INFO: Number of nodes with available pods: 1 Apr 12 00:40:49.986: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:50.981: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:50.985: INFO: Number of nodes with available pods: 1 Apr 12 00:40:50.985: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:51.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:51.985: INFO: Number of nodes with available pods: 1 Apr 12 00:40:51.985: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:52.981: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:52.988: INFO: Number of nodes with available pods: 1 Apr 12 00:40:52.988: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:53.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:53.985: INFO: Number of nodes with available pods: 1 Apr 12 00:40:53.985: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:54.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:54.985: INFO: Number of nodes with available pods: 1 Apr 12 00:40:54.985: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:55.981: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:55.984: INFO: Number of nodes with available pods: 1 Apr 12 00:40:55.984: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:40:56.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:40:56.985: INFO: Number of nodes with available pods: 2 Apr 12 00:40:56.985: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1205, will wait for the garbage collector to delete the pods Apr 12 00:40:57.047: INFO: Deleting DaemonSet.extensions daemon-set took: 6.034489ms Apr 12 00:40:57.348: INFO: 
Terminating DaemonSet.extensions daemon-set pods took: 300.256924ms Apr 12 00:41:03.052: INFO: Number of nodes with available pods: 0 Apr 12 00:41:03.052: INFO: Number of running nodes: 0, number of available pods: 0 Apr 12 00:41:03.054: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1205/daemonsets","resourceVersion":"7350073"},"items":null} Apr 12 00:41:03.057: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1205/pods","resourceVersion":"7350073"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:03.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1205" for this suite. • [SLOW TEST:20.266 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":223,"skipped":3724,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:03.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 12 00:41:03.804: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:41:05.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248863, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248863, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248863, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722248863, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:41:08.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:08.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-66" for this suite. STEP: Destroying namespace "webhook-66-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.959 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":224,"skipped":3724,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:09.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the 
label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 12 00:41:09.338: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7481 /api/v1/namespaces/watch-7481/configmaps/e2e-watch-test-label-changed c212b8db-e174-4ff6-91f5-edcf5456d16a 7350166 0 2020-04-12 00:41:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 12 00:41:09.339: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7481 /api/v1/namespaces/watch-7481/configmaps/e2e-watch-test-label-changed c212b8db-e174-4ff6-91f5-edcf5456d16a 7350167 0 2020-04-12 00:41:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 12 00:41:09.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7481 /api/v1/namespaces/watch-7481/configmaps/e2e-watch-test-label-changed c212b8db-e174-4ff6-91f5-edcf5456d16a 7350168 0 2020-04-12 00:41:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 12 00:41:19.387: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7481 /api/v1/namespaces/watch-7481/configmaps/e2e-watch-test-label-changed c212b8db-e174-4ff6-91f5-edcf5456d16a 7350215 0 2020-04-12 00:41:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 12 00:41:19.388: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7481 /api/v1/namespaces/watch-7481/configmaps/e2e-watch-test-label-changed c212b8db-e174-4ff6-91f5-edcf5456d16a 7350216 0 2020-04-12 00:41:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 12 00:41:19.388: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7481 /api/v1/namespaces/watch-7481/configmaps/e2e-watch-test-label-changed c212b8db-e174-4ff6-91f5-edcf5456d16a 7350217 0 2020-04-12 00:41:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:19.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7481" for this suite. 
• [SLOW TEST:10.375 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":225,"skipped":3746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:19.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:41:19.480: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:20.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9695" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":226,"skipped":3772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:20.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:41:20.215: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 12 00:41:22.272: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:23.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1301" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":227,"skipped":3812,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:23.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-308701ac-b52b-439b-a5b8-83c7b98a2002 STEP: Creating a pod to test consume secrets Apr 12 00:41:23.511: INFO: Waiting up to 5m0s for pod "pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75" in namespace "secrets-1736" to be "Succeeded or Failed" Apr 12 00:41:23.581: INFO: Pod "pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75": Phase="Pending", Reason="", readiness=false. Elapsed: 69.830099ms Apr 12 00:41:25.604: INFO: Pod "pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092492211s Apr 12 00:41:27.608: INFO: Pod "pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.096407207s STEP: Saw pod success Apr 12 00:41:27.608: INFO: Pod "pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75" satisfied condition "Succeeded or Failed" Apr 12 00:41:27.611: INFO: Trying to get logs from node latest-worker pod pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75 container secret-env-test: STEP: delete the pod Apr 12 00:41:27.672: INFO: Waiting for pod pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75 to disappear Apr 12 00:41:27.699: INFO: Pod pod-secrets-449ed5c7-f2f4-4063-94c0-d044dc714a75 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:27.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1736" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3815,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:27.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:31.861: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "containers-7341" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:31.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-c28f6042-8c17-413e-bd32-d0bcb1eb9a08 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c28f6042-8c17-413e-bd32-d0bcb1eb9a08 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:40.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7169" for this suite. 
• [SLOW TEST:8.223 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3882,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:40.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4828 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4828 I0412 00:41:40.275457 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4828, replica count: 2 I0412 00:41:43.325948 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0412 00:41:46.326212 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 12 00:41:46.326: INFO: Creating new exec pod Apr 12 00:41:51.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4828 execpod7dtmn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 12 00:41:51.605: INFO: stderr: "I0412 00:41:51.512893 3073 log.go:172] (0xc00003a160) (0xc0009180a0) Create stream\nI0412 00:41:51.512980 3073 log.go:172] (0xc00003a160) (0xc0009180a0) Stream added, broadcasting: 1\nI0412 00:41:51.516505 3073 log.go:172] (0xc00003a160) Reply frame received for 1\nI0412 00:41:51.516559 3073 log.go:172] (0xc00003a160) (0xc0009c2000) Create stream\nI0412 00:41:51.516573 3073 log.go:172] (0xc00003a160) (0xc0009c2000) Stream added, broadcasting: 3\nI0412 00:41:51.517705 3073 log.go:172] (0xc00003a160) Reply frame received for 3\nI0412 00:41:51.517751 3073 log.go:172] (0xc00003a160) (0xc000232a00) Create stream\nI0412 00:41:51.517768 3073 log.go:172] (0xc00003a160) (0xc000232a00) Stream added, broadcasting: 5\nI0412 00:41:51.518915 3073 log.go:172] (0xc00003a160) Reply frame received for 5\nI0412 00:41:51.597714 3073 log.go:172] (0xc00003a160) Data frame received for 5\nI0412 00:41:51.597756 3073 log.go:172] (0xc000232a00) (5) Data frame handling\nI0412 00:41:51.597795 3073 log.go:172] (0xc000232a00) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0412 00:41:51.598230 3073 log.go:172] (0xc00003a160) Data frame received for 5\nI0412 00:41:51.598257 3073 log.go:172] (0xc000232a00) (5) Data frame handling\nI0412 00:41:51.598276 3073 log.go:172] (0xc000232a00) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0412 00:41:51.598727 3073 log.go:172] (0xc00003a160) Data frame received for 3\nI0412 00:41:51.598768 3073 
log.go:172] (0xc0009c2000) (3) Data frame handling\nI0412 00:41:51.598804 3073 log.go:172] (0xc00003a160) Data frame received for 5\nI0412 00:41:51.598827 3073 log.go:172] (0xc000232a00) (5) Data frame handling\nI0412 00:41:51.600727 3073 log.go:172] (0xc00003a160) Data frame received for 1\nI0412 00:41:51.600745 3073 log.go:172] (0xc0009180a0) (1) Data frame handling\nI0412 00:41:51.600755 3073 log.go:172] (0xc0009180a0) (1) Data frame sent\nI0412 00:41:51.600766 3073 log.go:172] (0xc00003a160) (0xc0009180a0) Stream removed, broadcasting: 1\nI0412 00:41:51.600782 3073 log.go:172] (0xc00003a160) Go away received\nI0412 00:41:51.601232 3073 log.go:172] (0xc00003a160) (0xc0009180a0) Stream removed, broadcasting: 1\nI0412 00:41:51.601259 3073 log.go:172] (0xc00003a160) (0xc0009c2000) Stream removed, broadcasting: 3\nI0412 00:41:51.601267 3073 log.go:172] (0xc00003a160) (0xc000232a00) Stream removed, broadcasting: 5\n" Apr 12 00:41:51.606: INFO: stdout: "" Apr 12 00:41:51.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4828 execpod7dtmn -- /bin/sh -x -c nc -zv -t -w 2 10.96.245.142 80' Apr 12 00:41:51.817: INFO: stderr: "I0412 00:41:51.735646 3096 log.go:172] (0xc0009a00b0) (0xc000849360) Create stream\nI0412 00:41:51.735720 3096 log.go:172] (0xc0009a00b0) (0xc000849360) Stream added, broadcasting: 1\nI0412 00:41:51.739081 3096 log.go:172] (0xc0009a00b0) Reply frame received for 1\nI0412 00:41:51.739121 3096 log.go:172] (0xc0009a00b0) (0xc0008495e0) Create stream\nI0412 00:41:51.739133 3096 log.go:172] (0xc0009a00b0) (0xc0008495e0) Stream added, broadcasting: 3\nI0412 00:41:51.740154 3096 log.go:172] (0xc0009a00b0) Reply frame received for 3\nI0412 00:41:51.740199 3096 log.go:172] (0xc0009a00b0) (0xc000ad8000) Create stream\nI0412 00:41:51.740212 3096 log.go:172] (0xc0009a00b0) (0xc000ad8000) Stream added, broadcasting: 5\nI0412 00:41:51.741451 3096 log.go:172] (0xc0009a00b0) Reply 
frame received for 5\nI0412 00:41:51.810173 3096 log.go:172] (0xc0009a00b0) Data frame received for 5\nI0412 00:41:51.810203 3096 log.go:172] (0xc000ad8000) (5) Data frame handling\nI0412 00:41:51.810224 3096 log.go:172] (0xc000ad8000) (5) Data frame sent\nI0412 00:41:51.810236 3096 log.go:172] (0xc0009a00b0) Data frame received for 5\nI0412 00:41:51.810265 3096 log.go:172] (0xc000ad8000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.245.142 80\nConnection to 10.96.245.142 80 port [tcp/http] succeeded!\nI0412 00:41:51.810311 3096 log.go:172] (0xc0009a00b0) Data frame received for 3\nI0412 00:41:51.810340 3096 log.go:172] (0xc0008495e0) (3) Data frame handling\nI0412 00:41:51.811801 3096 log.go:172] (0xc0009a00b0) Data frame received for 1\nI0412 00:41:51.811834 3096 log.go:172] (0xc000849360) (1) Data frame handling\nI0412 00:41:51.811867 3096 log.go:172] (0xc000849360) (1) Data frame sent\nI0412 00:41:51.812172 3096 log.go:172] (0xc0009a00b0) (0xc000849360) Stream removed, broadcasting: 1\nI0412 00:41:51.812221 3096 log.go:172] (0xc0009a00b0) Go away received\nI0412 00:41:51.812613 3096 log.go:172] (0xc0009a00b0) (0xc000849360) Stream removed, broadcasting: 1\nI0412 00:41:51.812632 3096 log.go:172] (0xc0009a00b0) (0xc0008495e0) Stream removed, broadcasting: 3\nI0412 00:41:51.812641 3096 log.go:172] (0xc0009a00b0) (0xc000ad8000) Stream removed, broadcasting: 5\n" Apr 12 00:41:51.817: INFO: stdout: "" Apr 12 00:41:51.817: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:51.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4828" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.759 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":231,"skipped":3902,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:51.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-dda03e6b-575b-4b07-b743-f69dd38b8e9d STEP: Creating a pod to test consume configMaps Apr 12 00:41:51.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c" in namespace "projected-9864" to be "Succeeded or Failed" Apr 12 00:41:51.964: INFO: Pod "pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.833296ms Apr 12 00:41:53.968: INFO: Pod "pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02366996s Apr 12 00:41:55.979: INFO: Pod "pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034686372s STEP: Saw pod success Apr 12 00:41:55.979: INFO: Pod "pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c" satisfied condition "Succeeded or Failed" Apr 12 00:41:55.981: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c container projected-configmap-volume-test: STEP: delete the pod Apr 12 00:41:55.997: INFO: Waiting for pod pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c to disappear Apr 12 00:41:56.002: INFO: Pod pod-projected-configmaps-14d62410-b156-4f2a-9eb6-d6f163e4483c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:41:56.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9864" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3904,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:41:56.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-35bd8811-fa71-4aaa-a083-ea19932e350e STEP: Creating a pod to test consume secrets Apr 12 00:41:56.112: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29" in namespace "projected-8669" to be "Succeeded or Failed" Apr 12 00:41:56.125: INFO: Pod "pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29": Phase="Pending", Reason="", readiness=false. Elapsed: 12.928595ms Apr 12 00:41:58.129: INFO: Pod "pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01739941s Apr 12 00:42:00.133: INFO: Pod "pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021464691s STEP: Saw pod success Apr 12 00:42:00.133: INFO: Pod "pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29" satisfied condition "Succeeded or Failed" Apr 12 00:42:00.137: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29 container secret-volume-test: STEP: delete the pod Apr 12 00:42:00.167: INFO: Waiting for pod pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29 to disappear Apr 12 00:42:00.179: INFO: Pod pod-projected-secrets-5805a5ab-5874-43f9-a11d-92ba0ddbba29 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:42:00.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8669" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3911,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:42:00.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-8c81b4cd-20f2-45ca-a694-f1a3dd0749eb STEP: Creating a pod to test consume configMaps Apr 
12 00:42:00.265: INFO: Waiting up to 5m0s for pod "pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e" in namespace "configmap-4047" to be "Succeeded or Failed" Apr 12 00:42:00.278: INFO: Pod "pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.352169ms Apr 12 00:42:02.282: INFO: Pod "pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017560387s Apr 12 00:42:04.286: INFO: Pod "pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02114498s STEP: Saw pod success Apr 12 00:42:04.286: INFO: Pod "pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e" satisfied condition "Succeeded or Failed" Apr 12 00:42:04.288: INFO: Trying to get logs from node latest-worker pod pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e container configmap-volume-test: STEP: delete the pod Apr 12 00:42:04.349: INFO: Waiting for pod pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e to disappear Apr 12 00:42:04.407: INFO: Pod pod-configmaps-41654284-f11c-4f0c-b61d-b5d3df7c169e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:42:04.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4047" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3922,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:42:04.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 12 00:42:08.515: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.518: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.521: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.524: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.533: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.536: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from 
pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.538: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.541: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:08.547: INFO: Lookups using dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local] Apr 12 00:42:13.551: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.554: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.557: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local from 
pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.560: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.569: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.571: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.574: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.577: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:13.608: INFO: Lookups using dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local] Apr 12 00:42:18.551: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.554: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.557: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.559: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.567: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.574: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.578: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod 
dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.580: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:18.585: INFO: Lookups using dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local] Apr 12 00:42:23.551: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.557: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.559: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.561: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod 
dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.568: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.570: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.572: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.575: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:23.581: INFO: Lookups using dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local] Apr 12 00:42:28.551: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.555: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.558: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.560: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.568: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.571: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.573: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.576: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:28.581: INFO: Lookups using dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local] Apr 12 00:42:33.552: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.556: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.559: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.562: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.572: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.575: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.578: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.581: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local from pod dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98: the server could not find the requested resource (get pods dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98) Apr 12 00:42:33.588: INFO: Lookups using dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4710.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4710.svc.cluster.local jessie_udp@dns-test-service-2.dns-4710.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4710.svc.cluster.local] Apr 12 00:42:38.588: INFO: DNS probes using dns-4710/dns-test-2d65ecfc-f0f4-4d31-a8d7-e63eea890a98 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 
00:42:38.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4710" for this suite. • [SLOW TEST:34.342 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":235,"skipped":3922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:42:38.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:42:55.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4767" for this suite. • [SLOW TEST:16.558 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":236,"skipped":3945,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:42:55.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 12 00:42:55.411: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:42:55.416: INFO: Number of nodes with available pods: 0 Apr 12 00:42:55.416: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:42:56.420: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:42:56.422: INFO: Number of nodes with available pods: 0 Apr 12 00:42:56.422: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:42:57.492: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:42:57.495: INFO: Number of nodes with available pods: 0 Apr 12 00:42:57.495: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:42:58.439: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:42:58.442: INFO: Number of nodes with available pods: 1 Apr 12 00:42:58.442: INFO: Node latest-worker2 is running more than one daemon pod Apr 12 00:42:59.421: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:42:59.425: INFO: Number of nodes with available pods: 2 Apr 12 00:42:59.425: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 12 00:42:59.455: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:42:59.486: INFO: Number of nodes with available pods: 2 Apr 12 00:42:59.486: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8756, will wait for the garbage collector to delete the pods Apr 12 00:43:00.701: INFO: Deleting DaemonSet.extensions daemon-set took: 9.314987ms Apr 12 00:43:01.002: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.245664ms Apr 12 00:43:04.426: INFO: Number of nodes with available pods: 0 Apr 12 00:43:04.426: INFO: Number of running nodes: 0, number of available pods: 0 Apr 12 00:43:04.430: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8756/daemonsets","resourceVersion":"7351001"},"items":null} Apr 12 00:43:04.433: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8756/pods","resourceVersion":"7351001"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:43:04.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8756" for this suite. 
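The "Number of nodes with available pods" lines above poll until every schedulable node runs a daemon pod, skipping nodes whose taints the DaemonSet does not tolerate (here the `node-role.kubernetes.io/master:NoSchedule` taint on `latest-control-plane`). A minimal sketch of that readiness check follows; the function and field names are illustrative, not taken from the actual e2e framework code:

```python
# Hedged sketch of the readiness condition the log above polls for:
# a DaemonSet is "ready" once every node it can schedule onto has an
# available pod. Nodes carrying taints the DaemonSet does not tolerate
# are skipped, mirroring the "skip checking this node" log lines.

def schedulable_nodes(nodes, tolerations):
    """Return the nodes whose taints are all covered by `tolerations`."""
    return [
        n for n in nodes
        if all(t in tolerations for t in n.get("taints", []))
    ]

def daemonset_ready(nodes, tolerations, nodes_with_available_pods):
    """True once the available-pod count matches the schedulable node count."""
    return len(schedulable_nodes(nodes, tolerations)) == nodes_with_available_pods

nodes = [
    {"name": "latest-control-plane",
     "taints": ["node-role.kubernetes.io/master:NoSchedule"]},
    {"name": "latest-worker", "taints": []},
    {"name": "latest-worker2", "taints": []},
]

# The test DaemonSet tolerates nothing, so only the two workers count.
assert not daemonset_ready(nodes, tolerations=[], nodes_with_available_pods=0)
assert daemonset_ready(nodes, tolerations=[], nodes_with_available_pods=2)
```

This is why the log reports "Number of running nodes: 2" on a three-node cluster: the tainted control-plane node is excluded from the target set rather than counted as a failure.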
• [SLOW TEST:9.135 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":237,"skipped":3954,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:43:04.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:43:20.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5654" for this suite. • [SLOW TEST:16.363 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":238,"skipped":3974,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:43:20.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:43:20.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639" in namespace "projected-8457" to be "Succeeded or Failed" Apr 12 00:43:20.879: INFO: Pod "downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306167ms Apr 12 00:43:22.882: INFO: Pod "downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006689555s Apr 12 00:43:24.887: INFO: Pod "downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01109543s STEP: Saw pod success Apr 12 00:43:24.887: INFO: Pod "downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639" satisfied condition "Succeeded or Failed" Apr 12 00:43:24.890: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639 container client-container: STEP: delete the pod Apr 12 00:43:24.935: INFO: Waiting for pod downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639 to disappear Apr 12 00:43:24.947: INFO: Pod downwardapi-volume-6f74b722-599a-4368-ab76-a573941fd639 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:43:24.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8457" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":3974,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:43:24.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:43:25.067: INFO: Create a RollingUpdate DaemonSet Apr 12 
00:43:25.070: INFO: Check that daemon pods launch on every node of the cluster Apr 12 00:43:25.073: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:25.095: INFO: Number of nodes with available pods: 0 Apr 12 00:43:25.095: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:43:26.139: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:26.143: INFO: Number of nodes with available pods: 0 Apr 12 00:43:26.143: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:43:27.188: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:27.190: INFO: Number of nodes with available pods: 0 Apr 12 00:43:27.190: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:43:28.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:28.103: INFO: Number of nodes with available pods: 0 Apr 12 00:43:28.103: INFO: Node latest-worker is running more than one daemon pod Apr 12 00:43:29.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:29.103: INFO: Number of nodes with available pods: 2 Apr 12 00:43:29.103: INFO: Number of running nodes: 2, number of available pods: 2 Apr 12 00:43:29.103: INFO: Update the DaemonSet to trigger a rollout Apr 12 00:43:29.110: INFO: Updating DaemonSet daemon-set Apr 12 00:43:43.204: INFO: Roll back the DaemonSet before rollout is complete 
Apr 12 00:43:43.210: INFO: Updating DaemonSet daemon-set Apr 12 00:43:43.210: INFO: Make sure DaemonSet rollback is complete Apr 12 00:43:43.214: INFO: Wrong image for pod: daemon-set-bq9hm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 12 00:43:43.214: INFO: Pod daemon-set-bq9hm is not available Apr 12 00:43:43.233: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:44.237: INFO: Wrong image for pod: daemon-set-bq9hm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 12 00:43:44.237: INFO: Pod daemon-set-bq9hm is not available Apr 12 00:43:44.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:45.263: INFO: Wrong image for pod: daemon-set-bq9hm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 12 00:43:45.263: INFO: Pod daemon-set-bq9hm is not available Apr 12 00:43:45.266: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 12 00:43:46.238: INFO: Pod daemon-set-xc8vz is not available Apr 12 00:43:46.243: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6400, will wait for the garbage collector to delete the pods Apr 12 00:43:46.307: INFO: Deleting DaemonSet.extensions daemon-set took: 6.691058ms Apr 12 00:43:46.608: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.112732ms Apr 12 00:43:49.619: INFO: Number of nodes with available pods: 0 Apr 12 00:43:49.619: INFO: Number of running nodes: 0, number of available pods: 0 Apr 12 00:43:49.621: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6400/daemonsets","resourceVersion":"7351304"},"items":null} Apr 12 00:43:49.624: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6400/pods","resourceVersion":"7351305"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:43:49.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6400" for this suite. 
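The "rollback without unnecessary restarts" run above asserts one invariant: after rolling back the bad `foo:non-existent` update, only pods still carrying the wrong image (like `daemon-set-bq9hm` in the "Wrong image for pod" lines) get replaced, while pods already running the desired image keep their identity. A hedged sketch of that decision, with illustrative names rather than the real controller code:

```python
# Sketch (assumed simplification, not the actual DaemonSet controller logic)
# of the rollback invariant the test checks: replace only pods whose image
# differs from the rolled-back desired image; leave matching pods untouched,
# so healthy pods are not restarted unnecessarily.

DESIRED = "docker.io/library/httpd:2.4.38-alpine"

def pods_to_replace(pods, desired_image=DESIRED):
    """Return the names of pods that must be deleted and recreated."""
    return [name for name, image in pods.items() if image != desired_image]

pods = {
    "daemon-set-bq9hm": "foo:non-existent",  # caught mid-rollout, must be replaced
    "daemon-set-ok123": DESIRED,             # hypothetical healthy pod, must survive
}

assert pods_to_replace(pods) == ["daemon-set-bq9hm"]
```

In the log this shows up as exactly one replacement: `daemon-set-bq9hm` is reported "not available" until a fresh pod (`daemon-set-xc8vz`) takes its place, with no churn on the other node's pod.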
• [SLOW TEST:24.687 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":240,"skipped":3974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:43:49.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:43:49.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162" in namespace "downward-api-6874" to be "Succeeded or Failed" Apr 12 00:43:49.731: INFO: Pod "downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162": Phase="Pending", Reason="", readiness=false. Elapsed: 27.100752ms Apr 12 00:43:51.746: INFO: Pod "downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042225909s Apr 12 00:43:53.751: INFO: Pod "downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047013959s STEP: Saw pod success Apr 12 00:43:53.751: INFO: Pod "downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162" satisfied condition "Succeeded or Failed" Apr 12 00:43:53.755: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162 container client-container: STEP: delete the pod Apr 12 00:43:53.829: INFO: Waiting for pod downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162 to disappear Apr 12 00:43:53.834: INFO: Pod downwardapi-volume-0ada66c0-a175-4d46-afc7-c3a3c88ff162 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:43:53.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6874" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:43:53.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 12 00:43:53.890: INFO: Waiting up to 5m0s for pod "pod-2480e371-6d37-4b46-8b6f-e0c7589016df" in namespace "emptydir-1764" to be "Succeeded or Failed" Apr 12 00:43:53.894: INFO: Pod "pod-2480e371-6d37-4b46-8b6f-e0c7589016df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147536ms Apr 12 00:43:55.954: INFO: Pod "pod-2480e371-6d37-4b46-8b6f-e0c7589016df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063534677s Apr 12 00:43:57.958: INFO: Pod "pod-2480e371-6d37-4b46-8b6f-e0c7589016df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067703768s STEP: Saw pod success Apr 12 00:43:57.958: INFO: Pod "pod-2480e371-6d37-4b46-8b6f-e0c7589016df" satisfied condition "Succeeded or Failed" Apr 12 00:43:57.961: INFO: Trying to get logs from node latest-worker pod pod-2480e371-6d37-4b46-8b6f-e0c7589016df container test-container: STEP: delete the pod Apr 12 00:43:58.002: INFO: Waiting for pod pod-2480e371-6d37-4b46-8b6f-e0c7589016df to disappear Apr 12 00:43:58.011: INFO: Pod pod-2480e371-6d37-4b46-8b6f-e0c7589016df no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:43:58.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1764" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4072,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:43:58.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 12 00:43:58.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec" in namespace "projected-1472" to be "Succeeded or Failed" Apr 12 00:43:58.096: INFO: Pod "downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec": Phase="Pending", Reason="", readiness=false. Elapsed: 18.672957ms Apr 12 00:44:00.101: INFO: Pod "downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023620688s Apr 12 00:44:02.106: INFO: Pod "downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02825047s STEP: Saw pod success Apr 12 00:44:02.106: INFO: Pod "downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec" satisfied condition "Succeeded or Failed" Apr 12 00:44:02.110: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec container client-container: STEP: delete the pod Apr 12 00:44:02.139: INFO: Waiting for pod downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec to disappear Apr 12 00:44:02.150: INFO: Pod downwardapi-volume-81bacfd6-de6f-46ae-95ff-c47c22f3eaec no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:44:02.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1472" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4075,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:44:02.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:44:02.245: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 12 00:44:07.248: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 12 00:44:07.248: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 12 00:44:11.357: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2024 /apis/apps/v1/namespaces/deployment-2024/deployments/test-cleanup-deployment eb8fb616-389f-406b-8c16-96a34c8c6c01 7351513 1 2020-04-12 00:44:07 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047a7228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-12 00:44:07 +0000 UTC,LastTransitionTime:2020-04-12 00:44:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-577c77b589" has successfully progressed.,LastUpdateTime:2020-04-12 00:44:10 +0000 UTC,LastTransitionTime:2020-04-12 00:44:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 12 00:44:11.359: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-2024 /apis/apps/v1/namespaces/deployment-2024/replicasets/test-cleanup-deployment-577c77b589 0c37849d-b9c1-40fe-8bb7-e8cebdcb0008 7351501 1 2020-04-12 00:44:07 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment eb8fb616-389f-406b-8c16-96a34c8c6c01 0xc0047a7687 0xc0047a7688}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047a76f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:44:11.363: INFO: Pod "test-cleanup-deployment-577c77b589-vkxsc" is available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-vkxsc test-cleanup-deployment-577c77b589- deployment-2024 /api/v1/namespaces/deployment-2024/pods/test-cleanup-deployment-577c77b589-vkxsc 
806b4864-f0bc-4bf8-9df4-aa41fc6eb0ed 7351500 0 2020-04-12 00:44:07 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 0c37849d-b9c1-40fe-8bb7-e8cebdcb0008 0xc004775f27 0xc004775f28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jkkrf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jkkrf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jkkrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-work
er,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:44:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:44:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:44:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:44:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.155,StartTime:2020-04-12 00:44:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:44:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://b96bbf61e34c29b00479d057b1a4bdc698a771910612303bf728af9569956da7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.155,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:44:11.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2024" for this suite. • [SLOW TEST:9.211 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":244,"skipped":4099,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:44:11.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-1b63a6c5-144b-4453-9054-01a806bb2d5f in namespace container-probe-9554 Apr 12 00:44:15.452: INFO: Started pod busybox-1b63a6c5-144b-4453-9054-01a806bb2d5f in namespace container-probe-9554 STEP: checking the pod's current state and verifying that restartCount is present Apr 12 00:44:15.455: INFO: Initial restart count of pod busybox-1b63a6c5-144b-4453-9054-01a806bb2d5f is 0 Apr 12 00:45:07.569: INFO: Restart count of pod container-probe-9554/busybox-1b63a6c5-144b-4453-9054-01a806bb2d5f is now 1 (52.113749315s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:45:07.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9554" for this suite. 
• [SLOW TEST:56.263 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4102,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:45:07.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-d43f4987-1b50-4615-ba6d-c892e01cf60b STEP: Creating secret with name secret-projected-all-test-volume-abe9576a-aaec-4a22-9332-222445223059 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 12 00:45:07.759: INFO: Waiting up to 5m0s for pod "projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e" in namespace "projected-5013" to be "Succeeded or Failed" Apr 12 00:45:07.764: INFO: Pod "projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e": Phase="Pending", 
Reason="", readiness=false. Elapsed: 5.075733ms Apr 12 00:45:09.776: INFO: Pod "projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017086465s Apr 12 00:45:11.780: INFO: Pod "projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021368088s STEP: Saw pod success Apr 12 00:45:11.780: INFO: Pod "projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e" satisfied condition "Succeeded or Failed" Apr 12 00:45:11.783: INFO: Trying to get logs from node latest-worker2 pod projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e container projected-all-volume-test: STEP: delete the pod Apr 12 00:45:11.808: INFO: Waiting for pod projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e to disappear Apr 12 00:45:11.820: INFO: Pod projected-volume-dea069d4-7a94-4a42-9745-942fa259df2e no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:45:11.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5013" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:45:11.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-dbf40b95-62b9-4de0-a2c1-3591666c8b1b STEP: Creating a pod to test consume secrets Apr 12 00:45:11.913: INFO: Waiting up to 5m0s for pod "pod-secrets-22805a39-4054-4253-8739-a8bc637e7184" in namespace "secrets-9571" to be "Succeeded or Failed" Apr 12 00:45:11.954: INFO: Pod "pod-secrets-22805a39-4054-4253-8739-a8bc637e7184": Phase="Pending", Reason="", readiness=false. Elapsed: 40.648372ms Apr 12 00:45:13.958: INFO: Pod "pod-secrets-22805a39-4054-4253-8739-a8bc637e7184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044921122s Apr 12 00:45:15.962: INFO: Pod "pod-secrets-22805a39-4054-4253-8739-a8bc637e7184": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049050352s STEP: Saw pod success Apr 12 00:45:15.962: INFO: Pod "pod-secrets-22805a39-4054-4253-8739-a8bc637e7184" satisfied condition "Succeeded or Failed" Apr 12 00:45:15.966: INFO: Trying to get logs from node latest-worker pod pod-secrets-22805a39-4054-4253-8739-a8bc637e7184 container secret-volume-test: STEP: delete the pod Apr 12 00:45:15.996: INFO: Waiting for pod pod-secrets-22805a39-4054-4253-8739-a8bc637e7184 to disappear Apr 12 00:45:16.007: INFO: Pod pod-secrets-22805a39-4054-4253-8739-a8bc637e7184 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:45:16.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9571" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:45:16.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let 
webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 12 00:45:16.451: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 12 00:45:18.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249116, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249116, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249116, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249116, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 12 00:45:21.493: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, 
which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:45:21.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9886" for this suite. STEP: Destroying namespace "webhook-9886-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.714 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":248,"skipped":4257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:45:21.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:45:32.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2454" for this suite. • [SLOW TEST:11.199 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":249,"skipped":4290,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:45:32.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 12 00:45:43.041: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.041: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.070368 7 log.go:172] (0xc002dc6420) (0xc00223a0a0) Create stream I0412 00:45:43.070399 7 log.go:172] (0xc002dc6420) (0xc00223a0a0) Stream added, broadcasting: 1 I0412 00:45:43.072847 7 log.go:172] (0xc002dc6420) Reply frame received for 1 I0412 00:45:43.072897 7 log.go:172] (0xc002dc6420) (0xc001425900) Create stream I0412 00:45:43.072915 7 log.go:172] (0xc002dc6420) (0xc001425900) Stream added, broadcasting: 3 I0412 00:45:43.074172 7 log.go:172] (0xc002dc6420) Reply frame received for 3 I0412 00:45:43.074219 7 log.go:172] (0xc002dc6420) (0xc001425a40) Create stream 
I0412 00:45:43.074236 7 log.go:172] (0xc002dc6420) (0xc001425a40) Stream added, broadcasting: 5 I0412 00:45:43.075297 7 log.go:172] (0xc002dc6420) Reply frame received for 5 I0412 00:45:43.161752 7 log.go:172] (0xc002dc6420) Data frame received for 3 I0412 00:45:43.161800 7 log.go:172] (0xc001425900) (3) Data frame handling I0412 00:45:43.161814 7 log.go:172] (0xc001425900) (3) Data frame sent I0412 00:45:43.161833 7 log.go:172] (0xc002dc6420) Data frame received for 3 I0412 00:45:43.161848 7 log.go:172] (0xc001425900) (3) Data frame handling I0412 00:45:43.161898 7 log.go:172] (0xc002dc6420) Data frame received for 5 I0412 00:45:43.161931 7 log.go:172] (0xc001425a40) (5) Data frame handling I0412 00:45:43.163293 7 log.go:172] (0xc002dc6420) Data frame received for 1 I0412 00:45:43.163330 7 log.go:172] (0xc00223a0a0) (1) Data frame handling I0412 00:45:43.163363 7 log.go:172] (0xc00223a0a0) (1) Data frame sent I0412 00:45:43.163391 7 log.go:172] (0xc002dc6420) (0xc00223a0a0) Stream removed, broadcasting: 1 I0412 00:45:43.163497 7 log.go:172] (0xc002dc6420) Go away received I0412 00:45:43.163552 7 log.go:172] (0xc002dc6420) (0xc00223a0a0) Stream removed, broadcasting: 1 I0412 00:45:43.163586 7 log.go:172] (0xc002dc6420) (0xc001425900) Stream removed, broadcasting: 3 I0412 00:45:43.163600 7 log.go:172] (0xc002dc6420) (0xc001425a40) Stream removed, broadcasting: 5 Apr 12 00:45:43.163: INFO: Exec stderr: "" Apr 12 00:45:43.163: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.163: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.195916 7 log.go:172] (0xc002db24d0) (0xc001227c20) Create stream I0412 00:45:43.195949 7 log.go:172] (0xc002db24d0) (0xc001227c20) Stream added, broadcasting: 1 I0412 00:45:43.198750 7 log.go:172] (0xc002db24d0) Reply frame received for 1 I0412 00:45:43.198802 7 
log.go:172] (0xc002db24d0) (0xc001f30000) Create stream I0412 00:45:43.198817 7 log.go:172] (0xc002db24d0) (0xc001f30000) Stream added, broadcasting: 3 I0412 00:45:43.199878 7 log.go:172] (0xc002db24d0) Reply frame received for 3 I0412 00:45:43.199907 7 log.go:172] (0xc002db24d0) (0xc001227cc0) Create stream I0412 00:45:43.199922 7 log.go:172] (0xc002db24d0) (0xc001227cc0) Stream added, broadcasting: 5 I0412 00:45:43.200786 7 log.go:172] (0xc002db24d0) Reply frame received for 5 I0412 00:45:43.265738 7 log.go:172] (0xc002db24d0) Data frame received for 5 I0412 00:45:43.265766 7 log.go:172] (0xc001227cc0) (5) Data frame handling I0412 00:45:43.265800 7 log.go:172] (0xc002db24d0) Data frame received for 3 I0412 00:45:43.265817 7 log.go:172] (0xc001f30000) (3) Data frame handling I0412 00:45:43.265840 7 log.go:172] (0xc001f30000) (3) Data frame sent I0412 00:45:43.265855 7 log.go:172] (0xc002db24d0) Data frame received for 3 I0412 00:45:43.265867 7 log.go:172] (0xc001f30000) (3) Data frame handling I0412 00:45:43.266954 7 log.go:172] (0xc002db24d0) Data frame received for 1 I0412 00:45:43.266979 7 log.go:172] (0xc001227c20) (1) Data frame handling I0412 00:45:43.266989 7 log.go:172] (0xc001227c20) (1) Data frame sent I0412 00:45:43.267000 7 log.go:172] (0xc002db24d0) (0xc001227c20) Stream removed, broadcasting: 1 I0412 00:45:43.267016 7 log.go:172] (0xc002db24d0) Go away received I0412 00:45:43.267107 7 log.go:172] (0xc002db24d0) (0xc001227c20) Stream removed, broadcasting: 1 I0412 00:45:43.267132 7 log.go:172] (0xc002db24d0) (0xc001f30000) Stream removed, broadcasting: 3 I0412 00:45:43.267146 7 log.go:172] (0xc002db24d0) (0xc001227cc0) Stream removed, broadcasting: 5 Apr 12 00:45:43.267: INFO: Exec stderr: "" Apr 12 00:45:43.267: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.267: INFO: >>> kubeConfig: 
/root/.kube/config I0412 00:45:43.296253 7 log.go:172] (0xc002dc6a50) (0xc00223a3c0) Create stream I0412 00:45:43.296276 7 log.go:172] (0xc002dc6a50) (0xc00223a3c0) Stream added, broadcasting: 1 I0412 00:45:43.299011 7 log.go:172] (0xc002dc6a50) Reply frame received for 1 I0412 00:45:43.299066 7 log.go:172] (0xc002dc6a50) (0xc00223a5a0) Create stream I0412 00:45:43.299088 7 log.go:172] (0xc002dc6a50) (0xc00223a5a0) Stream added, broadcasting: 3 I0412 00:45:43.300073 7 log.go:172] (0xc002dc6a50) Reply frame received for 3 I0412 00:45:43.300118 7 log.go:172] (0xc002dc6a50) (0xc001f300a0) Create stream I0412 00:45:43.300133 7 log.go:172] (0xc002dc6a50) (0xc001f300a0) Stream added, broadcasting: 5 I0412 00:45:43.300979 7 log.go:172] (0xc002dc6a50) Reply frame received for 5 I0412 00:45:43.348601 7 log.go:172] (0xc002dc6a50) Data frame received for 3 I0412 00:45:43.348633 7 log.go:172] (0xc00223a5a0) (3) Data frame handling I0412 00:45:43.348647 7 log.go:172] (0xc00223a5a0) (3) Data frame sent I0412 00:45:43.348665 7 log.go:172] (0xc002dc6a50) Data frame received for 5 I0412 00:45:43.348672 7 log.go:172] (0xc001f300a0) (5) Data frame handling I0412 00:45:43.348802 7 log.go:172] (0xc002dc6a50) Data frame received for 3 I0412 00:45:43.348830 7 log.go:172] (0xc00223a5a0) (3) Data frame handling I0412 00:45:43.350185 7 log.go:172] (0xc002dc6a50) Data frame received for 1 I0412 00:45:43.350217 7 log.go:172] (0xc00223a3c0) (1) Data frame handling I0412 00:45:43.350247 7 log.go:172] (0xc00223a3c0) (1) Data frame sent I0412 00:45:43.350270 7 log.go:172] (0xc002dc6a50) (0xc00223a3c0) Stream removed, broadcasting: 1 I0412 00:45:43.350293 7 log.go:172] (0xc002dc6a50) Go away received I0412 00:45:43.350420 7 log.go:172] (0xc002dc6a50) (0xc00223a3c0) Stream removed, broadcasting: 1 I0412 00:45:43.350452 7 log.go:172] (0xc002dc6a50) (0xc00223a5a0) Stream removed, broadcasting: 3 I0412 00:45:43.350478 7 log.go:172] (0xc002dc6a50) (0xc001f300a0) Stream removed, broadcasting: 5 Apr 12 
00:45:43.350: INFO: Exec stderr: "" Apr 12 00:45:43.350: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.350: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.380673 7 log.go:172] (0xc002db2b00) (0xc001227f40) Create stream I0412 00:45:43.380710 7 log.go:172] (0xc002db2b00) (0xc001227f40) Stream added, broadcasting: 1 I0412 00:45:43.383071 7 log.go:172] (0xc002db2b00) Reply frame received for 1 I0412 00:45:43.383112 7 log.go:172] (0xc002db2b00) (0xc001f30140) Create stream I0412 00:45:43.383130 7 log.go:172] (0xc002db2b00) (0xc001f30140) Stream added, broadcasting: 3 I0412 00:45:43.383982 7 log.go:172] (0xc002db2b00) Reply frame received for 3 I0412 00:45:43.384023 7 log.go:172] (0xc002db2b00) (0xc00223a780) Create stream I0412 00:45:43.384039 7 log.go:172] (0xc002db2b00) (0xc00223a780) Stream added, broadcasting: 5 I0412 00:45:43.384999 7 log.go:172] (0xc002db2b00) Reply frame received for 5 I0412 00:45:43.453537 7 log.go:172] (0xc002db2b00) Data frame received for 5 I0412 00:45:43.453583 7 log.go:172] (0xc00223a780) (5) Data frame handling I0412 00:45:43.453632 7 log.go:172] (0xc002db2b00) Data frame received for 3 I0412 00:45:43.453658 7 log.go:172] (0xc001f30140) (3) Data frame handling I0412 00:45:43.453688 7 log.go:172] (0xc001f30140) (3) Data frame sent I0412 00:45:43.453710 7 log.go:172] (0xc002db2b00) Data frame received for 3 I0412 00:45:43.453729 7 log.go:172] (0xc001f30140) (3) Data frame handling I0412 00:45:43.455388 7 log.go:172] (0xc002db2b00) Data frame received for 1 I0412 00:45:43.455435 7 log.go:172] (0xc001227f40) (1) Data frame handling I0412 00:45:43.455479 7 log.go:172] (0xc001227f40) (1) Data frame sent I0412 00:45:43.455517 7 log.go:172] (0xc002db2b00) (0xc001227f40) Stream removed, broadcasting: 1 I0412 00:45:43.455643 7 log.go:172] (0xc002db2b00) 
(0xc001227f40) Stream removed, broadcasting: 1 I0412 00:45:43.455658 7 log.go:172] (0xc002db2b00) (0xc001f30140) Stream removed, broadcasting: 3 I0412 00:45:43.455796 7 log.go:172] (0xc002db2b00) Go away received I0412 00:45:43.455958 7 log.go:172] (0xc002db2b00) (0xc00223a780) Stream removed, broadcasting: 5 Apr 12 00:45:43.455: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 12 00:45:43.456: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.456: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.481672 7 log.go:172] (0xc002db34a0) (0xc0022ae640) Create stream I0412 00:45:43.481702 7 log.go:172] (0xc002db34a0) (0xc0022ae640) Stream added, broadcasting: 1 I0412 00:45:43.491881 7 log.go:172] (0xc002db34a0) Reply frame received for 1 I0412 00:45:43.491964 7 log.go:172] (0xc002db34a0) (0xc001f30280) Create stream I0412 00:45:43.492005 7 log.go:172] (0xc002db34a0) (0xc001f30280) Stream added, broadcasting: 3 I0412 00:45:43.494052 7 log.go:172] (0xc002db34a0) Reply frame received for 3 I0412 00:45:43.494122 7 log.go:172] (0xc002db34a0) (0xc001f305a0) Create stream I0412 00:45:43.494155 7 log.go:172] (0xc002db34a0) (0xc001f305a0) Stream added, broadcasting: 5 I0412 00:45:43.495045 7 log.go:172] (0xc002db34a0) Reply frame received for 5 I0412 00:45:43.534346 7 log.go:172] (0xc002db34a0) Data frame received for 5 I0412 00:45:43.534452 7 log.go:172] (0xc001f305a0) (5) Data frame handling I0412 00:45:43.534498 7 log.go:172] (0xc002db34a0) Data frame received for 3 I0412 00:45:43.534523 7 log.go:172] (0xc001f30280) (3) Data frame handling I0412 00:45:43.534561 7 log.go:172] (0xc001f30280) (3) Data frame sent I0412 00:45:43.534595 7 log.go:172] (0xc002db34a0) Data frame received for 3 I0412 00:45:43.534611 7 log.go:172] 
(0xc001f30280) (3) Data frame handling I0412 00:45:43.536142 7 log.go:172] (0xc002db34a0) Data frame received for 1 I0412 00:45:43.536174 7 log.go:172] (0xc0022ae640) (1) Data frame handling I0412 00:45:43.536194 7 log.go:172] (0xc0022ae640) (1) Data frame sent I0412 00:45:43.536216 7 log.go:172] (0xc002db34a0) (0xc0022ae640) Stream removed, broadcasting: 1 I0412 00:45:43.536238 7 log.go:172] (0xc002db34a0) Go away received I0412 00:45:43.536432 7 log.go:172] (0xc002db34a0) (0xc0022ae640) Stream removed, broadcasting: 1 I0412 00:45:43.536459 7 log.go:172] (0xc002db34a0) (0xc001f30280) Stream removed, broadcasting: 3 I0412 00:45:43.536482 7 log.go:172] (0xc002db34a0) (0xc001f305a0) Stream removed, broadcasting: 5 Apr 12 00:45:43.536: INFO: Exec stderr: "" Apr 12 00:45:43.536: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.536: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.572815 7 log.go:172] (0xc002c9e630) (0xc001326500) Create stream I0412 00:45:43.572846 7 log.go:172] (0xc002c9e630) (0xc001326500) Stream added, broadcasting: 1 I0412 00:45:43.575268 7 log.go:172] (0xc002c9e630) Reply frame received for 1 I0412 00:45:43.575320 7 log.go:172] (0xc002c9e630) (0xc00223a960) Create stream I0412 00:45:43.575344 7 log.go:172] (0xc002c9e630) (0xc00223a960) Stream added, broadcasting: 3 I0412 00:45:43.576389 7 log.go:172] (0xc002c9e630) Reply frame received for 3 I0412 00:45:43.576439 7 log.go:172] (0xc002c9e630) (0xc001425cc0) Create stream I0412 00:45:43.576455 7 log.go:172] (0xc002c9e630) (0xc001425cc0) Stream added, broadcasting: 5 I0412 00:45:43.577546 7 log.go:172] (0xc002c9e630) Reply frame received for 5 I0412 00:45:43.642586 7 log.go:172] (0xc002c9e630) Data frame received for 3 I0412 00:45:43.642646 7 log.go:172] (0xc00223a960) (3) Data frame handling I0412 00:45:43.642659 7 
log.go:172] (0xc00223a960) (3) Data frame sent I0412 00:45:43.642671 7 log.go:172] (0xc002c9e630) Data frame received for 3 I0412 00:45:43.642685 7 log.go:172] (0xc00223a960) (3) Data frame handling I0412 00:45:43.642708 7 log.go:172] (0xc002c9e630) Data frame received for 5 I0412 00:45:43.642719 7 log.go:172] (0xc001425cc0) (5) Data frame handling I0412 00:45:43.644368 7 log.go:172] (0xc002c9e630) Data frame received for 1 I0412 00:45:43.644405 7 log.go:172] (0xc001326500) (1) Data frame handling I0412 00:45:43.644435 7 log.go:172] (0xc001326500) (1) Data frame sent I0412 00:45:43.644455 7 log.go:172] (0xc002c9e630) (0xc001326500) Stream removed, broadcasting: 1 I0412 00:45:43.644473 7 log.go:172] (0xc002c9e630) Go away received I0412 00:45:43.644641 7 log.go:172] (0xc002c9e630) (0xc001326500) Stream removed, broadcasting: 1 I0412 00:45:43.644673 7 log.go:172] (0xc002c9e630) (0xc00223a960) Stream removed, broadcasting: 3 I0412 00:45:43.644705 7 log.go:172] (0xc002c9e630) (0xc001425cc0) Stream removed, broadcasting: 5 Apr 12 00:45:43.644: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 12 00:45:43.644: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.644: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.676861 7 log.go:172] (0xc002db3ad0) (0xc0022aec80) Create stream I0412 00:45:43.676893 7 log.go:172] (0xc002db3ad0) (0xc0022aec80) Stream added, broadcasting: 1 I0412 00:45:43.679375 7 log.go:172] (0xc002db3ad0) Reply frame received for 1 I0412 00:45:43.679419 7 log.go:172] (0xc002db3ad0) (0xc001425f40) Create stream I0412 00:45:43.679436 7 log.go:172] (0xc002db3ad0) (0xc001425f40) Stream added, broadcasting: 3 I0412 00:45:43.680443 7 log.go:172] (0xc002db3ad0) Reply frame received for 3 I0412 
00:45:43.680486 7 log.go:172] (0xc002db3ad0) (0xc0013265a0) Create stream I0412 00:45:43.680501 7 log.go:172] (0xc002db3ad0) (0xc0013265a0) Stream added, broadcasting: 5 I0412 00:45:43.681679 7 log.go:172] (0xc002db3ad0) Reply frame received for 5 I0412 00:45:43.746898 7 log.go:172] (0xc002db3ad0) Data frame received for 3 I0412 00:45:43.746940 7 log.go:172] (0xc001425f40) (3) Data frame handling I0412 00:45:43.746960 7 log.go:172] (0xc001425f40) (3) Data frame sent I0412 00:45:43.746995 7 log.go:172] (0xc002db3ad0) Data frame received for 5 I0412 00:45:43.747042 7 log.go:172] (0xc0013265a0) (5) Data frame handling I0412 00:45:43.747065 7 log.go:172] (0xc002db3ad0) Data frame received for 3 I0412 00:45:43.747102 7 log.go:172] (0xc001425f40) (3) Data frame handling I0412 00:45:43.748224 7 log.go:172] (0xc002db3ad0) Data frame received for 1 I0412 00:45:43.748243 7 log.go:172] (0xc0022aec80) (1) Data frame handling I0412 00:45:43.748254 7 log.go:172] (0xc0022aec80) (1) Data frame sent I0412 00:45:43.748267 7 log.go:172] (0xc002db3ad0) (0xc0022aec80) Stream removed, broadcasting: 1 I0412 00:45:43.748295 7 log.go:172] (0xc002db3ad0) Go away received I0412 00:45:43.748375 7 log.go:172] (0xc002db3ad0) (0xc0022aec80) Stream removed, broadcasting: 1 I0412 00:45:43.748398 7 log.go:172] (0xc002db3ad0) (0xc001425f40) Stream removed, broadcasting: 3 I0412 00:45:43.748409 7 log.go:172] (0xc002db3ad0) (0xc0013265a0) Stream removed, broadcasting: 5 Apr 12 00:45:43.748: INFO: Exec stderr: "" Apr 12 00:45:43.748: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.748: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.777540 7 log.go:172] (0xc00274e160) (0xc0022af2c0) Create stream I0412 00:45:43.777564 7 log.go:172] (0xc00274e160) (0xc0022af2c0) Stream added, broadcasting: 1 I0412 
00:45:43.779932 7 log.go:172] (0xc00274e160) Reply frame received for 1 I0412 00:45:43.779964 7 log.go:172] (0xc00274e160) (0xc000d38000) Create stream I0412 00:45:43.779977 7 log.go:172] (0xc00274e160) (0xc000d38000) Stream added, broadcasting: 3 I0412 00:45:43.781004 7 log.go:172] (0xc00274e160) Reply frame received for 3 I0412 00:45:43.781057 7 log.go:172] (0xc00274e160) (0xc0013266e0) Create stream I0412 00:45:43.781073 7 log.go:172] (0xc00274e160) (0xc0013266e0) Stream added, broadcasting: 5 I0412 00:45:43.782094 7 log.go:172] (0xc00274e160) Reply frame received for 5 I0412 00:45:43.855349 7 log.go:172] (0xc00274e160) Data frame received for 3 I0412 00:45:43.855381 7 log.go:172] (0xc000d38000) (3) Data frame handling I0412 00:45:43.855398 7 log.go:172] (0xc000d38000) (3) Data frame sent I0412 00:45:43.855412 7 log.go:172] (0xc00274e160) Data frame received for 3 I0412 00:45:43.855423 7 log.go:172] (0xc000d38000) (3) Data frame handling I0412 00:45:43.855536 7 log.go:172] (0xc00274e160) Data frame received for 5 I0412 00:45:43.855563 7 log.go:172] (0xc0013266e0) (5) Data frame handling I0412 00:45:43.856612 7 log.go:172] (0xc00274e160) Data frame received for 1 I0412 00:45:43.856631 7 log.go:172] (0xc0022af2c0) (1) Data frame handling I0412 00:45:43.856657 7 log.go:172] (0xc0022af2c0) (1) Data frame sent I0412 00:45:43.856833 7 log.go:172] (0xc00274e160) (0xc0022af2c0) Stream removed, broadcasting: 1 I0412 00:45:43.856859 7 log.go:172] (0xc00274e160) Go away received I0412 00:45:43.856962 7 log.go:172] (0xc00274e160) (0xc0022af2c0) Stream removed, broadcasting: 1 I0412 00:45:43.856990 7 log.go:172] (0xc00274e160) (0xc000d38000) Stream removed, broadcasting: 3 I0412 00:45:43.857007 7 log.go:172] (0xc00274e160) (0xc0013266e0) Stream removed, broadcasting: 5 Apr 12 00:45:43.857: INFO: Exec stderr: "" Apr 12 00:45:43.857: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.857: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.885502 7 log.go:172] (0xc002c9e9a0) (0xc001326b40) Create stream I0412 00:45:43.885543 7 log.go:172] (0xc002c9e9a0) (0xc001326b40) Stream added, broadcasting: 1 I0412 00:45:43.888189 7 log.go:172] (0xc002c9e9a0) Reply frame received for 1 I0412 00:45:43.888234 7 log.go:172] (0xc002c9e9a0) (0xc0022af5e0) Create stream I0412 00:45:43.888242 7 log.go:172] (0xc002c9e9a0) (0xc0022af5e0) Stream added, broadcasting: 3 I0412 00:45:43.889461 7 log.go:172] (0xc002c9e9a0) Reply frame received for 3 I0412 00:45:43.889499 7 log.go:172] (0xc002c9e9a0) (0xc00223ac80) Create stream I0412 00:45:43.889510 7 log.go:172] (0xc002c9e9a0) (0xc00223ac80) Stream added, broadcasting: 5 I0412 00:45:43.890423 7 log.go:172] (0xc002c9e9a0) Reply frame received for 5 I0412 00:45:43.954516 7 log.go:172] (0xc002c9e9a0) Data frame received for 5 I0412 00:45:43.954565 7 log.go:172] (0xc00223ac80) (5) Data frame handling I0412 00:45:43.954604 7 log.go:172] (0xc002c9e9a0) Data frame received for 3 I0412 00:45:43.954615 7 log.go:172] (0xc0022af5e0) (3) Data frame handling I0412 00:45:43.954635 7 log.go:172] (0xc0022af5e0) (3) Data frame sent I0412 00:45:43.954646 7 log.go:172] (0xc002c9e9a0) Data frame received for 3 I0412 00:45:43.954661 7 log.go:172] (0xc0022af5e0) (3) Data frame handling I0412 00:45:43.955871 7 log.go:172] (0xc002c9e9a0) Data frame received for 1 I0412 00:45:43.955896 7 log.go:172] (0xc001326b40) (1) Data frame handling I0412 00:45:43.955927 7 log.go:172] (0xc001326b40) (1) Data frame sent I0412 00:45:43.956105 7 log.go:172] (0xc002c9e9a0) (0xc001326b40) Stream removed, broadcasting: 1 I0412 00:45:43.956144 7 log.go:172] (0xc002c9e9a0) Go away received I0412 00:45:43.956260 7 log.go:172] (0xc002c9e9a0) (0xc001326b40) Stream removed, broadcasting: 1 I0412 00:45:43.956293 7 log.go:172] (0xc002c9e9a0) (0xc0022af5e0) Stream removed, broadcasting: 3 
I0412 00:45:43.956317 7 log.go:172] (0xc002c9e9a0) (0xc00223ac80) Stream removed, broadcasting: 5 Apr 12 00:45:43.956: INFO: Exec stderr: "" Apr 12 00:45:43.956: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9772 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 12 00:45:43.956: INFO: >>> kubeConfig: /root/.kube/config I0412 00:45:43.990969 7 log.go:172] (0xc002dc7080) (0xc00223b180) Create stream I0412 00:45:43.990996 7 log.go:172] (0xc002dc7080) (0xc00223b180) Stream added, broadcasting: 1 I0412 00:45:43.993564 7 log.go:172] (0xc002dc7080) Reply frame received for 1 I0412 00:45:43.993596 7 log.go:172] (0xc002dc7080) (0xc0022af680) Create stream I0412 00:45:43.993604 7 log.go:172] (0xc002dc7080) (0xc0022af680) Stream added, broadcasting: 3 I0412 00:45:43.994567 7 log.go:172] (0xc002dc7080) Reply frame received for 3 I0412 00:45:43.994600 7 log.go:172] (0xc002dc7080) (0xc0022af720) Create stream I0412 00:45:43.994612 7 log.go:172] (0xc002dc7080) (0xc0022af720) Stream added, broadcasting: 5 I0412 00:45:43.995507 7 log.go:172] (0xc002dc7080) Reply frame received for 5 I0412 00:45:44.065509 7 log.go:172] (0xc002dc7080) Data frame received for 3 I0412 00:45:44.065554 7 log.go:172] (0xc0022af680) (3) Data frame handling I0412 00:45:44.065589 7 log.go:172] (0xc0022af680) (3) Data frame sent I0412 00:45:44.065727 7 log.go:172] (0xc002dc7080) Data frame received for 5 I0412 00:45:44.065776 7 log.go:172] (0xc0022af720) (5) Data frame handling I0412 00:45:44.065816 7 log.go:172] (0xc002dc7080) Data frame received for 3 I0412 00:45:44.065856 7 log.go:172] (0xc0022af680) (3) Data frame handling I0412 00:45:44.067353 7 log.go:172] (0xc002dc7080) Data frame received for 1 I0412 00:45:44.067390 7 log.go:172] (0xc00223b180) (1) Data frame handling I0412 00:45:44.067412 7 log.go:172] (0xc00223b180) (1) Data frame sent I0412 00:45:44.067434 7 log.go:172] 
(0xc002dc7080) (0xc00223b180) Stream removed, broadcasting: 1
I0412 00:45:44.067462 7 log.go:172] (0xc002dc7080) Go away received
I0412 00:45:44.067576 7 log.go:172] (0xc002dc7080) (0xc00223b180) Stream removed, broadcasting: 1
I0412 00:45:44.067611 7 log.go:172] (0xc002dc7080) (0xc0022af680) Stream removed, broadcasting: 3
I0412 00:45:44.067631 7 log.go:172] (0xc002dc7080) (0xc0022af720) Stream removed, broadcasting: 5
Apr 12 00:45:44.067: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:45:44.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9772" for this suite.

• [SLOW TEST:11.148 seconds]
[k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4311,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret.
[Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:45:44.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:46:01.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9131" for this suite.

• [SLOW TEST:17.100 seconds]
[sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret.
[Conformance]","total":275,"completed":251,"skipped":4324,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:46:01.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:46:01.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3655" for this suite.
STEP: Destroying namespace "nspatchtest-026c3f57-bb27-471d-b97f-e63c67595077-1493" for this suite.
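The Namespaces [Serial] test above patches a namespace to add a label and then reads it back. An equivalent hand-run version is a strategic/merge patch; the label key and value below are illustrative, since the log does not show the ones the suite uses:

```yaml
# Applied with (name placeholder intentional):
#   kubectl patch namespace <name> --type=merge \
#     -p '{"metadata":{"labels":{"e2e-label":"e2e-value"}}}'
# The same merge patch in YAML form:
metadata:
  labels:
    e2e-label: e2e-value   # assumed key/value, not the suite's actual label
```

The "ensuring it has the label" step then corresponds to `kubectl get namespace <name> --show-labels`.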
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":252,"skipped":4328,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:46:01.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Apr 12 00:46:01.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:46:18.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6215" for this suite.
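The rename flow above amounts to changing the `name` of an entry in `spec.versions` of a multi-version CRD and re-applying it: the apiserver then serves the published OpenAPI spec under the new version name, drops the old one, and leaves the other version untouched. A sketch of such a CRD; the group, kind, version names, and schema are invented for illustration (the suite generates random names):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com   # illustrative, not the suite's generated name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v2                        # storage version, left unchanged by the rename
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v3                        # renaming this to e.g. v4 and re-applying publishes
    served: true                    # the spec under v4 and removes v3 from /openapi/v2
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```

Note the cluster under test runs kube-apiserver v1.17, where `apiextensions.k8s.io/v1beta1` CRDs were also still accepted; the v1 form is shown here.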
• [SLOW TEST:17.226 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":253,"skipped":4336,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:46:18.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:46:23.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2753" for this suite.
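The adoption sequence above (orphan pod first, matching ReplicationController second) can be sketched as the following pair of manifests. The `pod-adoption` name and the `name` label key come from the STEP lines; the image and command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption        # the 'name' label the RC selector will match
spec:
  containers:
  - name: pod-adoption
    image: busybox            # assumed image/command
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption        # matches the orphan, so the RC adopts it
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: busybox
        command: ["sleep", "3600"]
```

After the RC is created, the pod's `metadata.ownerReferences` gains an entry with `kind: ReplicationController`, which is what "the orphan pod is adopted" asserts; because `replicas: 1` is already satisfied by the adopted pod, the controller creates no new pod.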
• [SLOW TEST:5.137 seconds]
[sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":254,"skipped":4339,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:46:23.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3265
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-3265
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3265
Apr 12 00:46:23.836: INFO: Found 0 stateful pods, waiting for 1
Apr 12 00:46:33.840: INFO: Waiting for pod ss-0 to enter Running -
Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 12 00:46:33.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3265 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:46:36.573: INFO: stderr: "I0412 00:46:36.434779 3116 log.go:172] (0xc0008e6630) (0xc0006d3540) Create stream\nI0412 00:46:36.434836 3116 log.go:172] (0xc0008e6630) (0xc0006d3540) Stream added, broadcasting: 1\nI0412 00:46:36.437955 3116 log.go:172] (0xc0008e6630) Reply frame received for 1\nI0412 00:46:36.438014 3116 log.go:172] (0xc0008e6630) (0xc0007ee000) Create stream\nI0412 00:46:36.438030 3116 log.go:172] (0xc0008e6630) (0xc0007ee000) Stream added, broadcasting: 3\nI0412 00:46:36.439020 3116 log.go:172] (0xc0008e6630) Reply frame received for 3\nI0412 00:46:36.439062 3116 log.go:172] (0xc0008e6630) (0xc0006d35e0) Create stream\nI0412 00:46:36.439073 3116 log.go:172] (0xc0008e6630) (0xc0006d35e0) Stream added, broadcasting: 5\nI0412 00:46:36.440083 3116 log.go:172] (0xc0008e6630) Reply frame received for 5\nI0412 00:46:36.525886 3116 log.go:172] (0xc0008e6630) Data frame received for 5\nI0412 00:46:36.525923 3116 log.go:172] (0xc0006d35e0) (5) Data frame handling\nI0412 00:46:36.525951 3116 log.go:172] (0xc0006d35e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:46:36.564947 3116 log.go:172] (0xc0008e6630) Data frame received for 3\nI0412 00:46:36.564997 3116 log.go:172] (0xc0007ee000) (3) Data frame handling\nI0412 00:46:36.565033 3116 log.go:172] (0xc0007ee000) (3) Data frame sent\nI0412 00:46:36.565801 3116 log.go:172] (0xc0008e6630) Data frame received for 3\nI0412 00:46:36.565830 3116 log.go:172] (0xc0008e6630) Data frame received for 5\nI0412 00:46:36.565861 3116 log.go:172] (0xc0006d35e0) (5) Data frame handling\nI0412 00:46:36.565885 3116 
log.go:172] (0xc0007ee000) (3) Data frame handling\nI0412 00:46:36.567471 3116 log.go:172] (0xc0008e6630) Data frame received for 1\nI0412 00:46:36.567493 3116 log.go:172] (0xc0006d3540) (1) Data frame handling\nI0412 00:46:36.567506 3116 log.go:172] (0xc0006d3540) (1) Data frame sent\nI0412 00:46:36.567521 3116 log.go:172] (0xc0008e6630) (0xc0006d3540) Stream removed, broadcasting: 1\nI0412 00:46:36.567604 3116 log.go:172] (0xc0008e6630) Go away received\nI0412 00:46:36.567914 3116 log.go:172] (0xc0008e6630) (0xc0006d3540) Stream removed, broadcasting: 1\nI0412 00:46:36.567932 3116 log.go:172] (0xc0008e6630) (0xc0007ee000) Stream removed, broadcasting: 3\nI0412 00:46:36.567943 3116 log.go:172] (0xc0008e6630) (0xc0006d35e0) Stream removed, broadcasting: 5\n" Apr 12 00:46:36.573: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:46:36.573: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:46:36.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 12 00:46:46.580: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:46:46.580: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:46:46.593: INFO: POD NODE PHASE GRACE CONDITIONS Apr 12 00:46:46.593: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:23 +0000 UTC }] Apr 12 00:46:46.593: INFO: Apr 12 00:46:46.593: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 12 
00:46:47.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996325948s Apr 12 00:46:48.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991224299s Apr 12 00:46:49.782: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.823968575s Apr 12 00:46:50.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.807461471s Apr 12 00:46:51.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.787429919s Apr 12 00:46:52.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.782718722s Apr 12 00:46:53.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.778227246s Apr 12 00:46:54.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.773431227s Apr 12 00:46:55.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 769.137097ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3265 Apr 12 00:46:56.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3265 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:46:57.076: INFO: stderr: "I0412 00:46:56.962390 3149 log.go:172] (0xc000a0c000) (0xc000994000) Create stream\nI0412 00:46:56.962441 3149 log.go:172] (0xc000a0c000) (0xc000994000) Stream added, broadcasting: 1\nI0412 00:46:56.964720 3149 log.go:172] (0xc000a0c000) Reply frame received for 1\nI0412 00:46:56.964774 3149 log.go:172] (0xc000a0c000) (0xc000a76000) Create stream\nI0412 00:46:56.964789 3149 log.go:172] (0xc000a0c000) (0xc000a76000) Stream added, broadcasting: 3\nI0412 00:46:56.966037 3149 log.go:172] (0xc000a0c000) Reply frame received for 3\nI0412 00:46:56.966071 3149 log.go:172] (0xc000a0c000) (0xc000681220) Create stream\nI0412 00:46:56.966085 3149 log.go:172] (0xc000a0c000) (0xc000681220) Stream added, broadcasting: 5\nI0412 
00:46:56.966995 3149 log.go:172] (0xc000a0c000) Reply frame received for 5\nI0412 00:46:57.068793 3149 log.go:172] (0xc000a0c000) Data frame received for 3\nI0412 00:46:57.068843 3149 log.go:172] (0xc000a76000) (3) Data frame handling\nI0412 00:46:57.068876 3149 log.go:172] (0xc000a76000) (3) Data frame sent\nI0412 00:46:57.068906 3149 log.go:172] (0xc000a0c000) Data frame received for 3\nI0412 00:46:57.068925 3149 log.go:172] (0xc000a76000) (3) Data frame handling\nI0412 00:46:57.068949 3149 log.go:172] (0xc000a0c000) Data frame received for 5\nI0412 00:46:57.068974 3149 log.go:172] (0xc000681220) (5) Data frame handling\nI0412 00:46:57.068993 3149 log.go:172] (0xc000681220) (5) Data frame sent\nI0412 00:46:57.069005 3149 log.go:172] (0xc000a0c000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0412 00:46:57.069015 3149 log.go:172] (0xc000681220) (5) Data frame handling\nI0412 00:46:57.070846 3149 log.go:172] (0xc000a0c000) Data frame received for 1\nI0412 00:46:57.070868 3149 log.go:172] (0xc000994000) (1) Data frame handling\nI0412 00:46:57.070878 3149 log.go:172] (0xc000994000) (1) Data frame sent\nI0412 00:46:57.070886 3149 log.go:172] (0xc000a0c000) (0xc000994000) Stream removed, broadcasting: 1\nI0412 00:46:57.070942 3149 log.go:172] (0xc000a0c000) Go away received\nI0412 00:46:57.071135 3149 log.go:172] (0xc000a0c000) (0xc000994000) Stream removed, broadcasting: 1\nI0412 00:46:57.071147 3149 log.go:172] (0xc000a0c000) (0xc000a76000) Stream removed, broadcasting: 3\nI0412 00:46:57.071155 3149 log.go:172] (0xc000a0c000) (0xc000681220) Stream removed, broadcasting: 5\n" Apr 12 00:46:57.076: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:46:57.076: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:46:57.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3265 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:46:57.282: INFO: stderr: "I0412 00:46:57.201846 3170 log.go:172] (0xc0000e0790) (0xc000611540) Create stream\nI0412 00:46:57.201924 3170 log.go:172] (0xc0000e0790) (0xc000611540) Stream added, broadcasting: 1\nI0412 00:46:57.205035 3170 log.go:172] (0xc0000e0790) Reply frame received for 1\nI0412 00:46:57.205077 3170 log.go:172] (0xc0000e0790) (0xc0009e0000) Create stream\nI0412 00:46:57.205091 3170 log.go:172] (0xc0000e0790) (0xc0009e0000) Stream added, broadcasting: 3\nI0412 00:46:57.206547 3170 log.go:172] (0xc0000e0790) Reply frame received for 3\nI0412 00:46:57.206609 3170 log.go:172] (0xc0000e0790) (0xc0009e00a0) Create stream\nI0412 00:46:57.206637 3170 log.go:172] (0xc0000e0790) (0xc0009e00a0) Stream added, broadcasting: 5\nI0412 00:46:57.207895 3170 log.go:172] (0xc0000e0790) Reply frame received for 5\nI0412 00:46:57.275500 3170 log.go:172] (0xc0000e0790) Data frame received for 3\nI0412 00:46:57.275530 3170 log.go:172] (0xc0009e0000) (3) Data frame handling\nI0412 00:46:57.275539 3170 log.go:172] (0xc0009e0000) (3) Data frame sent\nI0412 00:46:57.275549 3170 log.go:172] (0xc0000e0790) Data frame received for 3\nI0412 00:46:57.275559 3170 log.go:172] (0xc0009e0000) (3) Data frame handling\nI0412 00:46:57.275569 3170 log.go:172] (0xc0000e0790) Data frame received for 5\nI0412 00:46:57.275575 3170 log.go:172] (0xc0009e00a0) (5) Data frame handling\nI0412 00:46:57.275585 3170 log.go:172] (0xc0009e00a0) (5) Data frame sent\nI0412 00:46:57.275591 3170 log.go:172] (0xc0000e0790) Data frame received for 5\nI0412 00:46:57.275596 3170 log.go:172] (0xc0009e00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0412 00:46:57.277283 3170 log.go:172] (0xc0000e0790) Data frame received for 1\nI0412 00:46:57.277300 
3170 log.go:172] (0xc000611540) (1) Data frame handling\nI0412 00:46:57.277318 3170 log.go:172] (0xc000611540) (1) Data frame sent\nI0412 00:46:57.277330 3170 log.go:172] (0xc0000e0790) (0xc000611540) Stream removed, broadcasting: 1\nI0412 00:46:57.277368 3170 log.go:172] (0xc0000e0790) Go away received\nI0412 00:46:57.277643 3170 log.go:172] (0xc0000e0790) (0xc000611540) Stream removed, broadcasting: 1\nI0412 00:46:57.277663 3170 log.go:172] (0xc0000e0790) (0xc0009e0000) Stream removed, broadcasting: 3\nI0412 00:46:57.277674 3170 log.go:172] (0xc0000e0790) (0xc0009e00a0) Stream removed, broadcasting: 5\n" Apr 12 00:46:57.282: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:46:57.282: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:46:57.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3265 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 12 00:46:57.486: INFO: stderr: "I0412 00:46:57.413479 3190 log.go:172] (0xc0009d3290) (0xc000ab06e0) Create stream\nI0412 00:46:57.413521 3190 log.go:172] (0xc0009d3290) (0xc000ab06e0) Stream added, broadcasting: 1\nI0412 00:46:57.416400 3190 log.go:172] (0xc0009d3290) Reply frame received for 1\nI0412 00:46:57.416429 3190 log.go:172] (0xc0009d3290) (0xc00068d540) Create stream\nI0412 00:46:57.416439 3190 log.go:172] (0xc0009d3290) (0xc00068d540) Stream added, broadcasting: 3\nI0412 00:46:57.417243 3190 log.go:172] (0xc0009d3290) Reply frame received for 3\nI0412 00:46:57.417272 3190 log.go:172] (0xc0009d3290) (0xc00052a960) Create stream\nI0412 00:46:57.417278 3190 log.go:172] (0xc0009d3290) (0xc00052a960) Stream added, broadcasting: 5\nI0412 00:46:57.417914 3190 log.go:172] (0xc0009d3290) Reply frame received for 5\nI0412 00:46:57.479904 3190 log.go:172] 
(0xc0009d3290) Data frame received for 5\nI0412 00:46:57.479934 3190 log.go:172] (0xc00052a960) (5) Data frame handling\nI0412 00:46:57.479948 3190 log.go:172] (0xc00052a960) (5) Data frame sent\nI0412 00:46:57.479954 3190 log.go:172] (0xc0009d3290) Data frame received for 5\nI0412 00:46:57.479959 3190 log.go:172] (0xc00052a960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0412 00:46:57.479979 3190 log.go:172] (0xc0009d3290) Data frame received for 3\nI0412 00:46:57.479987 3190 log.go:172] (0xc00068d540) (3) Data frame handling\nI0412 00:46:57.479995 3190 log.go:172] (0xc00068d540) (3) Data frame sent\nI0412 00:46:57.480001 3190 log.go:172] (0xc0009d3290) Data frame received for 3\nI0412 00:46:57.480008 3190 log.go:172] (0xc00068d540) (3) Data frame handling\nI0412 00:46:57.482110 3190 log.go:172] (0xc0009d3290) Data frame received for 1\nI0412 00:46:57.482143 3190 log.go:172] (0xc000ab06e0) (1) Data frame handling\nI0412 00:46:57.482164 3190 log.go:172] (0xc000ab06e0) (1) Data frame sent\nI0412 00:46:57.482181 3190 log.go:172] (0xc0009d3290) (0xc000ab06e0) Stream removed, broadcasting: 1\nI0412 00:46:57.482203 3190 log.go:172] (0xc0009d3290) Go away received\nI0412 00:46:57.482498 3190 log.go:172] (0xc0009d3290) (0xc000ab06e0) Stream removed, broadcasting: 1\nI0412 00:46:57.482517 3190 log.go:172] (0xc0009d3290) (0xc00068d540) Stream removed, broadcasting: 3\nI0412 00:46:57.482526 3190 log.go:172] (0xc0009d3290) (0xc00052a960) Stream removed, broadcasting: 5\n" Apr 12 00:46:57.486: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 12 00:46:57.486: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 12 00:46:57.489: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:46:57.489: INFO: Waiting 
for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 12 00:46:57.489: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 12 00:46:57.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3265 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:46:57.675: INFO: stderr: "I0412 00:46:57.614565 3211 log.go:172] (0xc0003f66e0) (0xc0008c21e0) Create stream\nI0412 00:46:57.614658 3211 log.go:172] (0xc0003f66e0) (0xc0008c21e0) Stream added, broadcasting: 1\nI0412 00:46:57.617920 3211 log.go:172] (0xc0003f66e0) Reply frame received for 1\nI0412 00:46:57.617975 3211 log.go:172] (0xc0003f66e0) (0xc0006aa000) Create stream\nI0412 00:46:57.617991 3211 log.go:172] (0xc0003f66e0) (0xc0006aa000) Stream added, broadcasting: 3\nI0412 00:46:57.618978 3211 log.go:172] (0xc0003f66e0) Reply frame received for 3\nI0412 00:46:57.619025 3211 log.go:172] (0xc0003f66e0) (0xc0008c2280) Create stream\nI0412 00:46:57.619053 3211 log.go:172] (0xc0003f66e0) (0xc0008c2280) Stream added, broadcasting: 5\nI0412 00:46:57.620445 3211 log.go:172] (0xc0003f66e0) Reply frame received for 5\nI0412 00:46:57.669343 3211 log.go:172] (0xc0003f66e0) Data frame received for 5\nI0412 00:46:57.669408 3211 log.go:172] (0xc0008c2280) (5) Data frame handling\nI0412 00:46:57.669426 3211 log.go:172] (0xc0008c2280) (5) Data frame sent\nI0412 00:46:57.669438 3211 log.go:172] (0xc0003f66e0) Data frame received for 5\nI0412 00:46:57.669449 3211 log.go:172] (0xc0008c2280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:46:57.669500 3211 log.go:172] (0xc0003f66e0) Data frame received for 3\nI0412 00:46:57.669526 3211 log.go:172] (0xc0006aa000) (3) Data frame handling\nI0412 00:46:57.669550 3211 log.go:172] (0xc0006aa000) (3) Data frame 
sent\nI0412 00:46:57.669563 3211 log.go:172] (0xc0003f66e0) Data frame received for 3\nI0412 00:46:57.669574 3211 log.go:172] (0xc0006aa000) (3) Data frame handling\nI0412 00:46:57.670945 3211 log.go:172] (0xc0003f66e0) Data frame received for 1\nI0412 00:46:57.670979 3211 log.go:172] (0xc0008c21e0) (1) Data frame handling\nI0412 00:46:57.670999 3211 log.go:172] (0xc0008c21e0) (1) Data frame sent\nI0412 00:46:57.671019 3211 log.go:172] (0xc0003f66e0) (0xc0008c21e0) Stream removed, broadcasting: 1\nI0412 00:46:57.671144 3211 log.go:172] (0xc0003f66e0) Go away received\nI0412 00:46:57.671470 3211 log.go:172] (0xc0003f66e0) (0xc0008c21e0) Stream removed, broadcasting: 1\nI0412 00:46:57.671493 3211 log.go:172] (0xc0003f66e0) (0xc0006aa000) Stream removed, broadcasting: 3\nI0412 00:46:57.671505 3211 log.go:172] (0xc0003f66e0) (0xc0008c2280) Stream removed, broadcasting: 5\n" Apr 12 00:46:57.676: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:46:57.676: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:46:57.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3265 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:46:57.902: INFO: stderr: "I0412 00:46:57.806577 3233 log.go:172] (0xc0000e0c60) (0xc000b0a000) Create stream\nI0412 00:46:57.806631 3233 log.go:172] (0xc0000e0c60) (0xc000b0a000) Stream added, broadcasting: 1\nI0412 00:46:57.809800 3233 log.go:172] (0xc0000e0c60) Reply frame received for 1\nI0412 00:46:57.809848 3233 log.go:172] (0xc0000e0c60) (0xc000b0a0a0) Create stream\nI0412 00:46:57.809864 3233 log.go:172] (0xc0000e0c60) (0xc000b0a0a0) Stream added, broadcasting: 3\nI0412 00:46:57.810890 3233 log.go:172] (0xc0000e0c60) Reply frame received for 3\nI0412 00:46:57.810919 3233 log.go:172] 
(0xc0000e0c60) (0xc000b0a140) Create stream\nI0412 00:46:57.810937 3233 log.go:172] (0xc0000e0c60) (0xc000b0a140) Stream added, broadcasting: 5\nI0412 00:46:57.811876 3233 log.go:172] (0xc0000e0c60) Reply frame received for 5\nI0412 00:46:57.865311 3233 log.go:172] (0xc0000e0c60) Data frame received for 5\nI0412 00:46:57.865339 3233 log.go:172] (0xc000b0a140) (5) Data frame handling\nI0412 00:46:57.865358 3233 log.go:172] (0xc000b0a140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:46:57.894916 3233 log.go:172] (0xc0000e0c60) Data frame received for 3\nI0412 00:46:57.895086 3233 log.go:172] (0xc000b0a0a0) (3) Data frame handling\nI0412 00:46:57.895200 3233 log.go:172] (0xc000b0a0a0) (3) Data frame sent\nI0412 00:46:57.895291 3233 log.go:172] (0xc0000e0c60) Data frame received for 3\nI0412 00:46:57.895322 3233 log.go:172] (0xc000b0a0a0) (3) Data frame handling\nI0412 00:46:57.895348 3233 log.go:172] (0xc0000e0c60) Data frame received for 5\nI0412 00:46:57.895367 3233 log.go:172] (0xc000b0a140) (5) Data frame handling\nI0412 00:46:57.897813 3233 log.go:172] (0xc0000e0c60) Data frame received for 1\nI0412 00:46:57.897843 3233 log.go:172] (0xc000b0a000) (1) Data frame handling\nI0412 00:46:57.897874 3233 log.go:172] (0xc000b0a000) (1) Data frame sent\nI0412 00:46:57.897903 3233 log.go:172] (0xc0000e0c60) (0xc000b0a000) Stream removed, broadcasting: 1\nI0412 00:46:57.897933 3233 log.go:172] (0xc0000e0c60) Go away received\nI0412 00:46:57.898430 3233 log.go:172] (0xc0000e0c60) (0xc000b0a000) Stream removed, broadcasting: 1\nI0412 00:46:57.898454 3233 log.go:172] (0xc0000e0c60) (0xc000b0a0a0) Stream removed, broadcasting: 3\nI0412 00:46:57.898466 3233 log.go:172] (0xc0000e0c60) (0xc000b0a140) Stream removed, broadcasting: 5\n" Apr 12 00:46:57.903: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:46:57.903: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:46:57.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3265 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 12 00:46:58.136: INFO: stderr: "I0412 00:46:58.040433 3256 log.go:172] (0xc0000e8630) (0xc0006b3360) Create stream\nI0412 00:46:58.040481 3256 log.go:172] (0xc0000e8630) (0xc0006b3360) Stream added, broadcasting: 1\nI0412 00:46:58.043007 3256 log.go:172] (0xc0000e8630) Reply frame received for 1\nI0412 00:46:58.043047 3256 log.go:172] (0xc0000e8630) (0xc0006b3400) Create stream\nI0412 00:46:58.043061 3256 log.go:172] (0xc0000e8630) (0xc0006b3400) Stream added, broadcasting: 3\nI0412 00:46:58.044070 3256 log.go:172] (0xc0000e8630) Reply frame received for 3\nI0412 00:46:58.044109 3256 log.go:172] (0xc0000e8630) (0xc0006b34a0) Create stream\nI0412 00:46:58.044122 3256 log.go:172] (0xc0000e8630) (0xc0006b34a0) Stream added, broadcasting: 5\nI0412 00:46:58.044916 3256 log.go:172] (0xc0000e8630) Reply frame received for 5\nI0412 00:46:58.097457 3256 log.go:172] (0xc0000e8630) Data frame received for 5\nI0412 00:46:58.097482 3256 log.go:172] (0xc0006b34a0) (5) Data frame handling\nI0412 00:46:58.097503 3256 log.go:172] (0xc0006b34a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0412 00:46:58.128969 3256 log.go:172] (0xc0000e8630) Data frame received for 3\nI0412 00:46:58.128996 3256 log.go:172] (0xc0006b3400) (3) Data frame handling\nI0412 00:46:58.129028 3256 log.go:172] (0xc0006b3400) (3) Data frame sent\nI0412 00:46:58.129044 3256 log.go:172] (0xc0000e8630) Data frame received for 3\nI0412 00:46:58.129056 3256 log.go:172] (0xc0006b3400) (3) Data frame handling\nI0412 00:46:58.129621 3256 log.go:172] (0xc0000e8630) Data frame received for 5\nI0412 00:46:58.129650 3256 log.go:172] (0xc0006b34a0) (5) Data frame handling\nI0412 00:46:58.131313 
3256 log.go:172] (0xc0000e8630) Data frame received for 1\nI0412 00:46:58.131332 3256 log.go:172] (0xc0006b3360) (1) Data frame handling\nI0412 00:46:58.131357 3256 log.go:172] (0xc0006b3360) (1) Data frame sent\nI0412 00:46:58.131372 3256 log.go:172] (0xc0000e8630) (0xc0006b3360) Stream removed, broadcasting: 1\nI0412 00:46:58.131475 3256 log.go:172] (0xc0000e8630) Go away received\nI0412 00:46:58.131783 3256 log.go:172] (0xc0000e8630) (0xc0006b3360) Stream removed, broadcasting: 1\nI0412 00:46:58.131811 3256 log.go:172] (0xc0000e8630) (0xc0006b3400) Stream removed, broadcasting: 3\nI0412 00:46:58.131826 3256 log.go:172] (0xc0000e8630) (0xc0006b34a0) Stream removed, broadcasting: 5\n" Apr 12 00:46:58.136: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 12 00:46:58.136: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 12 00:46:58.137: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:46:58.140: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 12 00:47:08.149: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:47:08.149: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:47:08.149: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 12 00:47:08.162: INFO: POD NODE PHASE GRACE CONDITIONS Apr 12 00:47:08.162: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-04-12 00:46:23 +0000 UTC }] Apr 12 00:47:08.162: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:08.162: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:08.162: INFO: Apr 12 00:47:08.162: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 12 00:47:09.184: INFO: POD NODE PHASE GRACE CONDITIONS Apr 12 00:47:09.184: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:23 +0000 UTC }] Apr 12 00:47:09.184: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:09.184: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:09.184: INFO: Apr 12 00:47:09.184: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 12 00:47:10.190: INFO: POD NODE PHASE GRACE CONDITIONS Apr 12 00:47:10.190: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:23 +0000 UTC }] Apr 12 00:47:10.190: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:10.190: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 
00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:10.190: INFO: Apr 12 00:47:10.190: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 12 00:47:11.194: INFO: POD NODE PHASE GRACE CONDITIONS Apr 12 00:47:11.194: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:11.194: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:11.194: INFO: Apr 12 00:47:11.194: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 12 00:47:12.199: INFO: POD NODE PHASE GRACE CONDITIONS Apr 12 00:47:12.199: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:12.199: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-12 00:46:46 +0000 UTC }] Apr 12 00:47:12.199: INFO: Apr 12 00:47:12.199: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 12 00:47:13.202: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.956543582s Apr 12 00:47:14.206: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.95359776s Apr 12 00:47:15.213: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.949299982s Apr 12 00:47:16.218: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.94207293s Apr 12 00:47:17.221: INFO: Verifying statefulset ss doesn't scale past 0 for another 937.856065ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3265 Apr 12 00:47:18.225: INFO: Scaling statefulset ss to 0 Apr 12 00:47:18.236: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 12 00:47:18.239: INFO: Deleting all statefulset in ns statefulset-3265 Apr 12 00:47:18.242: INFO: Scaling statefulset ss to 0 Apr 12 00:47:18.251: INFO: Waiting for statefulset status.replicas updated to 0 Apr 12 00:47:18.254: INFO: Deleting statefulset 
ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:47:18.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3265" for this suite. • [SLOW TEST:54.567 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":255,"skipped":4342,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:47:18.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 12 00:47:18.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8849' Apr 12 00:47:18.425: INFO: stderr: "" Apr 12 00:47:18.425: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 12 00:47:18.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8849' Apr 12 00:47:32.749: INFO: stderr: "" Apr 12 00:47:32.749: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:47:32.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8849" for this suite. 
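[Editor's note] The `kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine` invocation logged above creates a bare Pod (with `--restart=Never`, kubectl sets `spec.restartPolicy: Never` and generates a Pod object rather than a Deployment). A minimal sketch of the roughly equivalent manifest — the container name is an illustrative assumption, not taken from the log; the pod and namespace names are from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod      # pod name from the log
  namespace: kubectl-8849       # test namespace from the log
spec:
  restartPolicy: Never          # what --restart=Never sets
  containers:
  - name: httpd                 # assumed container name (illustrative)
    image: docker.io/library/httpd:2.4.38-alpine
```

Applying this with `kubectl apply -f pod.yaml` and deleting it with `kubectl delete pod e2e-test-httpd-pod -n kubectl-8849` mirrors the create/delete pair the test performs.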
• [SLOW TEST:14.471 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":256,"skipped":4347,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:47:32.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:47:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1410" for this suite. • [SLOW TEST:11.160 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":257,"skipped":4360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:47:43.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 12 00:47:43.998: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Apr 12 00:47:44.565: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 12 00:47:46.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249264, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249264, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249264, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722249264, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 12 00:47:49.297: INFO: Waited 590.59021ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:47:49.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5936" for this suite. 
• [SLOW TEST:5.914 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":258,"skipped":4401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:47:49.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 12 00:47:50.179: INFO: Waiting up to 5m0s for pod "var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e" in namespace "var-expansion-7482" to be "Succeeded or Failed" Apr 12 00:47:50.201: INFO: Pod "var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.527234ms Apr 12 00:47:52.206: INFO: Pod "var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026337205s Apr 12 00:47:54.210: INFO: Pod "var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030541414s STEP: Saw pod success Apr 12 00:47:54.210: INFO: Pod "var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e" satisfied condition "Succeeded or Failed" Apr 12 00:47:54.214: INFO: Trying to get logs from node latest-worker pod var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e container dapi-container: STEP: delete the pod Apr 12 00:47:54.269: INFO: Waiting for pod var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e to disappear Apr 12 00:47:54.275: INFO: Pod var-expansion-694a9c2c-a60d-4d1d-bbd4-d8da988be99e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:47:54.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7482" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4444,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:47:54.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 12 00:47:54.343: INFO: Creating deployment "webserver-deployment" Apr 12 00:47:54.347: INFO: Waiting for observed generation 1 Apr 12 00:47:56.356: INFO: Waiting for all required pods to come up Apr 12 00:47:56.359: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 12 00:48:04.379: INFO: Waiting for deployment "webserver-deployment" to complete Apr 12 00:48:04.386: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 12 00:48:04.392: INFO: Updating deployment webserver-deployment Apr 12 00:48:04.392: INFO: Waiting for observed generation 2 Apr 12 00:48:06.415: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 12 00:48:06.418: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 12 00:48:06.421: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 12 00:48:06.429: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 12 00:48:06.429: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 12 00:48:06.432: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 12 00:48:06.437: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 12 00:48:06.437: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 12 00:48:06.443: INFO: Updating deployment webserver-deployment Apr 12 00:48:06.443: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 12 00:48:06.508: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 12 00:48:06.532: INFO: Verifying that second rollout's replicaset has .spec.replicas = 
13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 12 00:48:06.664: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-924 /apis/apps/v1/namespaces/deployment-924/deployments/webserver-deployment 5ab6742c-09bd-4cd8-91ec-dfbe211667c6 7353043 3 2020-04-12 00:47:54 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039c29d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-12 00:48:05 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 
UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-12 00:48:06 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 12 00:48:06.727: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-924 /apis/apps/v1/namespaces/deployment-924/replicasets/webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 7353088 3 2020-04-12 00:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5ab6742c-09bd-4cd8-91ec-dfbe211667c6 0xc0039c3137 0xc0039c3138}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039c31b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:48:06.727: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 12 00:48:06.728: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-924 /apis/apps/v1/namespaces/deployment-924/replicasets/webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 7353089 3 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5ab6742c-09bd-4cd8-91ec-dfbe211667c6 0xc0039c2f37 0xc0039c2f38}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039c3058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 12 00:48:06.742: INFO: Pod 
"webserver-deployment-595b5b9587-45qsn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-45qsn webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-45qsn 69030e1d-fc4d-46a5-94d1-2e09c7952844 7352964 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc002892057 0xc002892058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy
:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.166,StartTime:2020-04-12 00:47:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:48:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://11ee0986f71e0c712bd72c39d7328f3e13f5e7ca29fd053c1bebfeb8b170508f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.166,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.742: INFO: Pod "webserver-deployment-595b5b9587-4qrxk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4qrxk webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-4qrxk a31d08f4-df7a-4af8-8b84-c8c34b1b9d20 7352888 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0028923a7 0xc0028923a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.178,StartTime:2020-04-12 00:47:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:47:56 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c4aead6189a483ec3aa60cf6600fa5179fe8407570e9270e50520628e2ef9fe9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.743: INFO: Pod "webserver-deployment-595b5b9587-5td65" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5td65 webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-5td65 7a6dddc9-971c-44e6-8abd-5a0664485265 7353061 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0028925b7 0xc0028925b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.743: INFO: Pod "webserver-deployment-595b5b9587-7cqdc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7cqdc webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-7cqdc 56c9723e-66b4-4165-bd20-72a75dd65334 7352958 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc002892727 0xc002892728}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.165,StartTime:2020-04-12 00:47:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:48:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e8cbccadf381d0f9fcd37d42f05a7980f035c5025b8b14ff92093397b846bb57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.743: INFO: Pod "webserver-deployment-595b5b9587-7k2rm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7k2rm webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-7k2rm 2314313e-86c5-4cd1-bf13-e7cb5b848b47 7353062 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0039c37c7 0xc0039c37c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.744: INFO: Pod "webserver-deployment-595b5b9587-82w7x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-82w7x webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-82w7x f143ac07-058f-4d4c-8870-94859104a296 7353060 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0039c3967 0xc0039c3968}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.744: INFO: Pod "webserver-deployment-595b5b9587-8d857" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8d857 webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-8d857 32762e57-ec85-48be-b546-28555935b0b1 7353064 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0039c3a97 0xc0039c3a98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.744: INFO: Pod "webserver-deployment-595b5b9587-hjtfx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hjtfx webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-hjtfx 19182402-28e3-44e7-961c-667a460010b3 7352932 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0039c3be7 0xc0039c3be8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.163,StartTime:2020-04-12 00:47:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:48:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://54a6d659648b4dddc34ce0f402d7f6af819469fdf5e67aa5393a6b6d9b854769,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.745: INFO: Pod "webserver-deployment-595b5b9587-hz7vk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hz7vk webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-hz7vk bd5f4be4-b46e-4904-868e-c816b8a45199 7352900 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0039c3db7 0xc0039c3db8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.162,StartTime:2020-04-12 00:47:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:47:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://70281a0ed319c52ed9db8a47d1d63d5a8ab7dfdfc26a703ee5116a2523a20d4b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.745: INFO: Pod "webserver-deployment-595b5b9587-j559l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j559l webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-j559l e08b2db6-4b36-4728-8152-1a56b22bc09b 7353087 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc0039c3f57 0xc0039c3f58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.746: INFO: Pod "webserver-deployment-595b5b9587-jrfcw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jrfcw webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-jrfcw 7b50ac7a-28e5-4d03-973b-2d0ab6c19d97 7353051 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4217 0xc004cb4218}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.746: INFO: Pod "webserver-deployment-595b5b9587-jxm6d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jxm6d webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-jxm6d 3f213c94-401f-460d-a617-4b0d34a297d7 7353083 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb43c7 0xc004cb43c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 12 00:48:06.746: INFO: Pod "webserver-deployment-595b5b9587-lxx6c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lxx6c webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-lxx6c f002eeea-12d4-46c1-a7fa-303cc30bdbd3 7353103 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4587 0xc004cb4588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-12 00:48:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.746: INFO: Pod "webserver-deployment-595b5b9587-rs9r5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rs9r5 webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-rs9r5 6d0dceb3-7d5a-47b8-ba23-3635b0423a66 7352945 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4767 0xc004cb4768}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.181,StartTime:2020-04-12 00:47:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:48:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://09fb17d5c98f5a4fc27a14971dcd3017c9d5f2616aaa57152043f839d3c5618e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.746: INFO: Pod "webserver-deployment-595b5b9587-tntb9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tntb9 webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-tntb9 f67bf39e-5d7b-4b65-9527-3fdfbe139ae7 7353085 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4927 0xc004cb4928}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.747: INFO: Pod "webserver-deployment-595b5b9587-wksbg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wksbg webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-wksbg f2e35d9b-af68-4bf9-8ba6-f1cc2beed34f 7352941 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4a77 0xc004cb4a78}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.180,StartTime:2020-04-12 00:47:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:48:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8d5fae59bde89c4be757000a75448c6dc96fa79a8cdfd1307cd6ada881d4b47c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.747: INFO: Pod "webserver-deployment-595b5b9587-xczhn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xczhn webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-xczhn 5d4e01d4-3576-420d-b338-c619117969c2 7353084 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4c47 0xc004cb4c48}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.747: INFO: Pod "webserver-deployment-595b5b9587-zcg8w" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zcg8w webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-zcg8w dc80ea00-af31-4506-9cb4-a9008a0185e7 7353096 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4d97 0xc004cb4d98}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-12 00:48:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.747: INFO: Pod "webserver-deployment-595b5b9587-zlr5q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zlr5q webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-zlr5q 0e40d906-08dd-4fab-8fae-2a3bf591cab6 7353086 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb4f07 0xc004cb4f08}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.748: INFO: Pod "webserver-deployment-595b5b9587-zm2j8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zm2j8 webserver-deployment-595b5b9587- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-595b5b9587-zm2j8 f826002e-87b8-4151-8a57-d7f4d3448f26 7352925 0 2020-04-12 00:47:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3ec9bf4-1cd7-4f74-aafd-4c23307c3b72 0xc004cb5027 0xc004cb5028}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:47:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.179,StartTime:2020-04-12 00:47:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-12 00:48:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://53d07365792b1087376a1aa69c11262b6d36ea3c73fe44305fec77cababc48ad,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.749: INFO: Pod "webserver-deployment-c7997dcc8-77cgz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-77cgz webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-77cgz 2a2cb2eb-4638-428a-a75b-88f86903bc79 7353020 0 2020-04-12 00:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb51a7 0xc004cb51a8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-12 00:48:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.749: INFO: Pod "webserver-deployment-c7997dcc8-826mp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-826mp webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-826mp 98731593-a3b3-46ed-b7d6-e5d4286addb3 7353070 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb5377 0xc004cb5378}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.750: INFO: Pod "webserver-deployment-c7997dcc8-9t6g8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9t6g8 webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-9t6g8 a431fc02-80b0-49d1-a3ce-fa80be64ce07 7352998 0 2020-04-12 00:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb54d7 0xc004cb54d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-12 00:48:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.750: INFO: Pod "webserver-deployment-c7997dcc8-bcn4k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bcn4k webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-bcn4k df4b14eb-05b7-40ee-8c5e-75970fc5f943 7353079 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb56b7 0xc004cb56b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.750: INFO: Pod "webserver-deployment-c7997dcc8-bfftm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bfftm webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-bfftm 51a65bed-9520-4666-8997-c25aa61b152f 7353091 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb5827 0xc004cb5828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.750: INFO: Pod "webserver-deployment-c7997dcc8-c84dd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c84dd webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-c84dd 9b4b4f81-e95d-4e4a-8c3c-3d27b64a734f 7353001 0 2020-04-12 00:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb59a7 0xc004cb59a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-12 00:48:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.750: INFO: Pod "webserver-deployment-c7997dcc8-dhl97" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dhl97 webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-dhl97 7c8bfb8b-5ea9-437a-b410-3df37bd7efbb 7353082 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb5bc7 0xc004cb5bc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.750: INFO: Pod "webserver-deployment-c7997dcc8-g4rmp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g4rmp webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-g4rmp 2f81d612-63ae-4cb4-a157-884b228a0330 7353026 0 2020-04-12 00:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb5d07 0xc004cb5d08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-12 00:48:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.750: INFO: Pod "webserver-deployment-c7997dcc8-ggt7c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ggt7c webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-ggt7c 47921d53-5b7c-4f2a-ac11-a79af620a981 7353019 0 2020-04-12 00:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc004cb5e87 0xc004cb5e88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-12 00:48:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.751: INFO: Pod "webserver-deployment-c7997dcc8-jnwdt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jnwdt webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-jnwdt 4bca25e7-c44c-4ec3-9d75-c64b9dded031 7353055 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc002daa2a7 0xc002daa2a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.751: INFO: Pod "webserver-deployment-c7997dcc8-k5x7d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k5x7d webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-k5x7d 4a7d59e3-8038-4a4b-b01d-b98fa7c654ae 7353045 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc002daa5d7 0xc002daa5d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.751: INFO: Pod "webserver-deployment-c7997dcc8-wbnj5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wbnj5 webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-wbnj5 d711fb62-8070-480c-af5e-f960519f2d2e 7353080 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc002daa767 0xc002daa768}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 12 00:48:06.751: INFO: Pod "webserver-deployment-c7997dcc8-z6p4q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z6p4q webserver-deployment-c7997dcc8- deployment-924 /api/v1/namespaces/deployment-924/pods/webserver-deployment-c7997dcc8-z6p4q 92972cc9-d237-4fe7-85e9-97e015055d35 7353081 0 2020-04-12 00:48:06 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 13d76f95-0c31-4693-a4fd-e85cdd73577a 0xc002daa8e7 0xc002daa8e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z428s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z428s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z428s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-12 00:48:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:48:06.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-924" for this suite. 
• [SLOW TEST:12.663 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":260,"skipped":4448,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:48:06.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-7e9a0556-a4fd-479c-879b-8ac2c31e6de3 STEP: Creating a pod to test consume secrets Apr 12 00:48:07.226: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104" in namespace "projected-1981" to be "Succeeded or Failed" Apr 12 00:48:07.229: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.748254ms Apr 12 00:48:09.428: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201376612s Apr 12 00:48:11.553: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326548209s Apr 12 00:48:13.738: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511658979s Apr 12 00:48:15.777: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550320362s Apr 12 00:48:17.939: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. Elapsed: 10.712933405s Apr 12 00:48:19.967: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. Elapsed: 12.740118963s Apr 12 00:48:22.153: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Pending", Reason="", readiness=false. Elapsed: 14.926526255s Apr 12 00:48:24.166: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Running", Reason="", readiness=true. Elapsed: 16.939822131s Apr 12 00:48:26.171: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Running", Reason="", readiness=true. Elapsed: 18.944399052s Apr 12 00:48:28.175: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.948347144s STEP: Saw pod success Apr 12 00:48:28.175: INFO: Pod "pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104" satisfied condition "Succeeded or Failed" Apr 12 00:48:28.178: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104 container projected-secret-volume-test: STEP: delete the pod Apr 12 00:48:28.221: INFO: Waiting for pod pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104 to disappear Apr 12 00:48:28.250: INFO: Pod pod-projected-secrets-ae641355-7f78-44b4-a651-351662207104 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:48:28.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1981" for this suite. • [SLOW TEST:21.313 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4456,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:48:28.259: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-e1f1a830-b9b8-4df9-aae4-b14f545e1c68 STEP: Creating a pod to test consume secrets Apr 12 00:48:28.373: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9" in namespace "projected-6101" to be "Succeeded or Failed" Apr 12 00:48:28.379: INFO: Pod "pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55244ms Apr 12 00:48:30.397: INFO: Pod "pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024207375s Apr 12 00:48:32.404: INFO: Pod "pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030818141s STEP: Saw pod success Apr 12 00:48:32.404: INFO: Pod "pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9" satisfied condition "Succeeded or Failed" Apr 12 00:48:32.407: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9 container projected-secret-volume-test: STEP: delete the pod Apr 12 00:48:32.439: INFO: Waiting for pod pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9 to disappear Apr 12 00:48:32.451: INFO: Pod pod-projected-secrets-374a9e1b-fdde-43a7-a54a-932d963cb3f9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:48:32.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6101" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4474,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 12 00:48:32.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-4da788c2-0d2f-494e-a959-b9f9da0d5cc0 STEP: Creating 
a pod to test consume configMaps Apr 12 00:48:32.688: INFO: Waiting up to 5m0s for pod "pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca" in namespace "configmap-835" to be "Succeeded or Failed" Apr 12 00:48:32.703: INFO: Pod "pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.470075ms Apr 12 00:48:34.707: INFO: Pod "pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018794227s Apr 12 00:48:36.711: INFO: Pod "pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023019125s STEP: Saw pod success Apr 12 00:48:36.711: INFO: Pod "pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca" satisfied condition "Succeeded or Failed" Apr 12 00:48:36.715: INFO: Trying to get logs from node latest-worker pod pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca container configmap-volume-test: STEP: delete the pod Apr 12 00:48:36.746: INFO: Waiting for pod pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca to disappear Apr 12 00:48:36.759: INFO: Pod pod-configmaps-18e52ffb-2fed-48c5-b88a-b3657b0457ca no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 12 00:48:36.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-835" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4479,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:48:36.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:48:36.846: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1f8d129e-38d9-4c4e-8e90-c7b6ce34a02f" in namespace "security-context-test-1035" to be "Succeeded or Failed"
Apr 12 00:48:36.853: INFO: Pod "busybox-user-65534-1f8d129e-38d9-4c4e-8e90-c7b6ce34a02f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.754383ms
Apr 12 00:48:38.858: INFO: Pod "busybox-user-65534-1f8d129e-38d9-4c4e-8e90-c7b6ce34a02f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012615761s
Apr 12 00:48:40.862: INFO: Pod "busybox-user-65534-1f8d129e-38d9-4c4e-8e90-c7b6ce34a02f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016651276s
Apr 12 00:48:40.862: INFO: Pod "busybox-user-65534-1f8d129e-38d9-4c4e-8e90-c7b6ce34a02f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:48:40.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1035" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4521,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:48:40.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 12 00:48:40.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7" in namespace "downward-api-7657" to be "Succeeded or Failed"
Apr 12 00:48:40.949: INFO: Pod "downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.608603ms
Apr 12 00:48:42.969: INFO: Pod "downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023791193s
Apr 12 00:48:44.973: INFO: Pod "downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027350608s
STEP: Saw pod success
Apr 12 00:48:44.973: INFO: Pod "downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7" satisfied condition "Succeeded or Failed"
Apr 12 00:48:44.975: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7 container client-container:
STEP: delete the pod
Apr 12 00:48:45.003: INFO: Waiting for pod downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7 to disappear
Apr 12 00:48:45.015: INFO: Pod downwardapi-volume-68a01c76-a4df-4eb0-ac84-d76911d64cb7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:48:45.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7657" for this suite.
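Editor's note: the repeated "Waiting up to 5m0s for pod ... to be "Succeeded or Failed"" entries above, with ~2s spacing between Elapsed values, come from the framework's fixed-interval poll until the pod reaches a terminal phase. A minimal Python sketch of that pattern, with the status source injected so it needs no cluster (this is an illustration, not the framework's actual Go code):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300, poll_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase ("Succeeded"/"Failed"),
    logging elapsed time like the e2e framework does, or time out."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
        sleep(poll_s)
```

Injecting `get_phase` (in the real framework, a GET on the pod's status) keeps the retry logic testable in isolation.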
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:48:45.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0412 00:48:55.092554 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 12 00:48:55.092: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:48:55.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2280" for this suite.
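Editor's note: the test above deletes a ReplicationController without orphaning, then waits for its pods to be garbage collected via their ownerReferences. A toy Python model of that semantics (objects keyed by UID, each with a set of owner UIDs; this illustrates the behavior under test, not the real controller's implementation):

```python
def cascade_delete(objects, owner_uid, orphan=False):
    """Return the objects that survive deleting `owner_uid`.

    objects: dict mapping uid -> set of owner uids (its ownerReferences).
    Without orphaning, dependents of the deleted owner are collected too;
    with orphaning, dependents survive but lose that ownerReference.
    """
    survivors = {}
    for uid, owners in objects.items():
        if uid == owner_uid:
            continue  # the owner itself is deleted either way
        if owner_uid in owners and not orphan:
            continue  # dependent is garbage collected
        # orphaning strips the dangling ownerReference instead
        survivors[uid] = owners - {owner_uid}
    return survivors
```

The "wait for all pods to be garbage collected" step corresponds to polling until the non-orphaned dependents are gone.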
• [SLOW TEST:10.076 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":266,"skipped":4585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:48:55.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Apr 12 00:48:59.206: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2152 PodName:pod-sharedvolume-0e931857-8382-483c-9a36-9fe715efd236 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 12 00:48:59.206: INFO: >>> kubeConfig: /root/.kube/config
I0412 00:48:59.240908 7 log.go:172] (0xc002c9e630) (0xc001e23540) Create stream
I0412 00:48:59.240944 7 log.go:172] (0xc002c9e630) (0xc001e23540) Stream added, broadcasting: 1
I0412 00:48:59.242893 7 log.go:172] (0xc002c9e630) Reply frame received for 1
I0412 00:48:59.242941 7 log.go:172] (0xc002c9e630) (0xc000d38c80) Create stream
I0412 00:48:59.242958 7 log.go:172] (0xc002c9e630) (0xc000d38c80) Stream added, broadcasting: 3
I0412 00:48:59.243905 7 log.go:172] (0xc002c9e630) Reply frame received for 3
I0412 00:48:59.243948 7 log.go:172] (0xc002c9e630) (0xc001b5d5e0) Create stream
I0412 00:48:59.243969 7 log.go:172] (0xc002c9e630) (0xc001b5d5e0) Stream added, broadcasting: 5
I0412 00:48:59.244810 7 log.go:172] (0xc002c9e630) Reply frame received for 5
I0412 00:48:59.296057 7 log.go:172] (0xc002c9e630) Data frame received for 5
I0412 00:48:59.296094 7 log.go:172] (0xc001b5d5e0) (5) Data frame handling
I0412 00:48:59.296136 7 log.go:172] (0xc002c9e630) Data frame received for 3
I0412 00:48:59.296184 7 log.go:172] (0xc000d38c80) (3) Data frame handling
I0412 00:48:59.296212 7 log.go:172] (0xc000d38c80) (3) Data frame sent
I0412 00:48:59.296236 7 log.go:172] (0xc002c9e630) Data frame received for 3
I0412 00:48:59.296250 7 log.go:172] (0xc000d38c80) (3) Data frame handling
I0412 00:48:59.297781 7 log.go:172] (0xc002c9e630) Data frame received for 1
I0412 00:48:59.297796 7 log.go:172] (0xc001e23540) (1) Data frame handling
I0412 00:48:59.297810 7 log.go:172] (0xc001e23540) (1) Data frame sent
I0412 00:48:59.297819 7 log.go:172] (0xc002c9e630) (0xc001e23540) Stream removed, broadcasting: 1
I0412 00:48:59.297905 7 log.go:172] (0xc002c9e630) (0xc001e23540) Stream removed, broadcasting: 1
I0412 00:48:59.297930 7 log.go:172] (0xc002c9e630) (0xc000d38c80) Stream removed, broadcasting: 3
I0412 00:48:59.297962 7 log.go:172] (0xc002c9e630) Go away received
I0412 00:48:59.298006 7 log.go:172] (0xc002c9e630) (0xc001b5d5e0) Stream removed, broadcasting: 5
Apr 12 00:48:59.298: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:48:59.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2152" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":267,"skipped":4608,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:48:59.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:03.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9173" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4634,"failed":0}
SS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:49:03.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:49:03.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:07.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9258" for this suite.
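Editor's note: the websocket log-retrieval test above hits the core/v1 pod `log` subresource (the websocket variant upgrades the same endpoint). A small Python sketch of how that URL is assembled; the server address and pod name here are hypothetical placeholders, not values from this run:

```python
from urllib.parse import quote, urlencode

def pod_log_url(api_server, namespace, pod, container=None, follow=False):
    """Build the URL for the core/v1 pod log subresource:
    GET /api/v1/namespaces/{ns}/pods/{name}/log"""
    path = f"/api/v1/namespaces/{quote(namespace)}/pods/{quote(pod)}/log"
    params = {}
    if container:
        params["container"] = container
    if follow:
        params["follow"] = "true"  # stream logs instead of a one-shot read
    query = f"?{urlencode(params)}" if params else ""
    return f"{api_server}{path}{query}"
```

The test's namespace (`pods-9258`) is real; everything else in a call like `pod_log_url("https://10.0.0.1:6443", "pods-9258", "pod-logs-websocket", follow=True)` is illustrative.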
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4636,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:49:07.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7460.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7460.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 12 00:49:13.641: INFO: DNS probes using dns-7460/dns-test-72b63a5a-83aa-4b82-9522-3339bec54337 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:13.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7460" for this suite.
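Editor's note: the `awk` pipeline in the probe scripts above turns the pod's IP into its pod A record name (dots become dashes, suffixed with `<namespace>.pod.<cluster-domain>`). The same transformation in Python, for clarity; the example IP is an assumption, not taken from this run:

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Derive the pod A record queried by the probe, equivalent to:
    hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'"""
    return f'{pod_ip.replace(".", "-")}.{namespace}.pod.{cluster_domain}'
```

For example, `pod_a_record("10.244.1.5", "dns-7460")` yields `10-244-1-5.dns-7460.pod.cluster.local`, which is what the `dig ... A` checks resolve over both UDP and TCP.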
• [SLOW TEST:6.230 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":270,"skipped":4652,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:49:13.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 12 00:49:13.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6" in namespace "downward-api-716" to be "Succeeded or Failed"
Apr 12 00:49:14.084: INFO: Pod "downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6": Phase="Pending", Reason="", readiness=false. Elapsed: 201.330276ms
Apr 12 00:49:16.215: INFO: Pod "downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332338858s
Apr 12 00:49:18.220: INFO: Pod "downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336794244s
STEP: Saw pod success
Apr 12 00:49:18.220: INFO: Pod "downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6" satisfied condition "Succeeded or Failed"
Apr 12 00:49:18.223: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6 container client-container:
STEP: delete the pod
Apr 12 00:49:18.256: INFO: Waiting for pod downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6 to disappear
Apr 12 00:49:18.267: INFO: Pod downwardapi-volume-d95b4658-184c-493b-abb9-0d224d3223c6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:18.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-716" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4663,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:49:18.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 12 00:49:18.337: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-237ae9d7-c493-4a24-82c4-f50dd6138605" in namespace "security-context-test-7495" to be "Succeeded or Failed"
Apr 12 00:49:18.357: INFO: Pod "busybox-privileged-false-237ae9d7-c493-4a24-82c4-f50dd6138605": Phase="Pending", Reason="", readiness=false. Elapsed: 20.144203ms
Apr 12 00:49:20.361: INFO: Pod "busybox-privileged-false-237ae9d7-c493-4a24-82c4-f50dd6138605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023978316s
Apr 12 00:49:22.364: INFO: Pod "busybox-privileged-false-237ae9d7-c493-4a24-82c4-f50dd6138605": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027138103s
Apr 12 00:49:22.364: INFO: Pod "busybox-privileged-false-237ae9d7-c493-4a24-82c4-f50dd6138605" satisfied condition "Succeeded or Failed"
Apr 12 00:49:22.370: INFO: Got logs for pod "busybox-privileged-false-237ae9d7-c493-4a24-82c4-f50dd6138605": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:22.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7495" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4686,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:49:22.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-fd6e81bc-f464-4345-a1cb-4e7a9b857bfa
STEP: Creating configMap with name cm-test-opt-upd-a1cbed24-479a-44f6-808c-bef77a0e1205
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-fd6e81bc-f464-4345-a1cb-4e7a9b857bfa
STEP: Updating configmap cm-test-opt-upd-a1cbed24-479a-44f6-808c-bef77a0e1205
STEP: Creating configMap with name cm-test-opt-create-dcfebcc0-3226-4ee6-9d37-17c35006fe59
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:30.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7013" for this suite.
• [SLOW TEST:8.216 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4692,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:49:30.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-484c7c78-ff14-48db-85dd-4046189410ac
STEP: Creating a pod to test consume secrets
Apr 12 00:49:30.774: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e" in namespace "projected-8901" to be "Succeeded or Failed"
Apr 12 00:49:30.783: INFO: Pod "pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.983003ms
Apr 12 00:49:32.787: INFO: Pod "pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012889474s
Apr 12 00:49:34.791: INFO: Pod "pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017219641s
STEP: Saw pod success
Apr 12 00:49:34.791: INFO: Pod "pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e" satisfied condition "Succeeded or Failed"
Apr 12 00:49:34.794: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e container projected-secret-volume-test:
STEP: delete the pod
Apr 12 00:49:34.832: INFO: Waiting for pod pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e to disappear
Apr 12 00:49:34.873: INFO: Pod pod-projected-secrets-4412c8b1-0746-4b21-8ef2-f06cb6140e3e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:34.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8901" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4702,"failed":0}
SSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 12 00:49:35.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Apr 12 00:49:35.097: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5405" to be "Succeeded or Failed"
Apr 12 00:49:35.100: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.295027ms
Apr 12 00:49:37.244: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147470666s
Apr 12 00:49:39.248: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151391136s
Apr 12 00:49:41.252: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.155581525s
STEP: Saw pod success
Apr 12 00:49:41.252: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Apr 12 00:49:41.256: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 12 00:49:41.287: INFO: Waiting for pod pod-host-path-test to disappear
Apr 12 00:49:41.298: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 12 00:49:41.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5405" for this suite.
• [SLOW TEST:6.287 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4708,"failed":0}
SSSSSSSSS
Apr 12 00:49:41.307: INFO: Running AfterSuite actions on all nodes
Apr 12 00:49:41.307: INFO: Running AfterSuite actions on node 1
Apr 12 00:49:41.307: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4377.536 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
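Editor's note: besides the JUnit XML noted above, the suite's progress is machine-readable from the JSON records (`{"msg": ..., "total": ..., "completed": ...}`) emitted after every spec throughout this log. A small Python sketch for extracting them, e.g. to cross-check the final tallies (the parsing approach is an assumption about the log layout, keyed on the `{"msg"` prefix):

```python
import json

def parse_progress(log_text):
    """Collect the per-spec JSON progress records from a Ginkgo e2e log.

    Records may be prefixed by '•' or other text on the same line, so we
    locate the '{"msg"' marker and parse from there; lines that are not
    valid JSON from that point on are skipped."""
    records = []
    for line in log_text.splitlines():
        start = line.find('{"msg"')
        if start == -1:
            continue
        try:
            records.append(json.loads(line[start:]))
        except ValueError:
            pass  # record broken across lines: ignore
    return records
```

Applied to this log, the last record should be the "Test Suite completed" entry with `completed == total == 275` and `failed == 0`, matching the `SUCCESS!` summary line.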