I0719 11:25:10.551348 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0719 11:25:10.740629 6 e2e.go:109] Starting e2e run "c184c73d-dc9a-4cfd-b3db-a9a480a9bd38" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595157909 - Will randomize all specs
Will run 278 of 4843 specs

Jul 19 11:25:10.798: INFO: >>> kubeConfig: /root/.kube/config
Jul 19 11:25:10.801: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 19 11:25:10.817: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 19 11:25:10.845: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 19 11:25:10.845: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 19 11:25:10.845: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 19 11:25:10.855: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 19 11:25:10.855: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 19 11:25:10.855: INFO: e2e test version: v1.17.8
Jul 19 11:25:10.856: INFO: kube-apiserver version: v1.17.5
Jul 19 11:25:10.856: INFO: >>> kubeConfig: /root/.kube/config
Jul 19 11:25:10.862: INFO: Cluster IP family: ipv4
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 11:25:10.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Jul 19 11:25:10.976: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-g5g6
STEP: Creating a pod to test atomic-volume-subpath
Jul 19 11:25:11.001: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-g5g6" in namespace "subpath-3615" to be "success or failure"
Jul 19 11:25:11.006: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662815ms
Jul 19 11:25:13.010: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008830869s
Jul 19 11:25:15.014: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012923689s
Jul 19 11:25:17.019: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 6.01740285s
Jul 19 11:25:19.022: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 8.020790922s
Jul 19 11:25:21.027: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 10.025420158s
Jul 19 11:25:23.154: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 12.153177066s
Jul 19 11:25:25.159: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 14.157593085s
Jul 19 11:25:27.172: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 16.170895469s
Jul 19 11:25:29.176: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 18.1743657s
Jul 19 11:25:31.179: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 20.177835674s
Jul 19 11:25:33.184: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Running", Reason="", readiness=true. Elapsed: 22.18295592s
Jul 19 11:25:35.191: INFO: Pod "pod-subpath-test-projected-g5g6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.189434666s
STEP: Saw pod success
Jul 19 11:25:35.191: INFO: Pod "pod-subpath-test-projected-g5g6" satisfied condition "success or failure"
Jul 19 11:25:35.194: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-g5g6 container test-container-subpath-projected-g5g6:
STEP: delete the pod
Jul 19 11:25:35.232: INFO: Waiting for pod pod-subpath-test-projected-g5g6 to disappear
Jul 19 11:25:35.242: INFO: Pod pod-subpath-test-projected-g5g6 no longer exists
STEP: Deleting pod pod-subpath-test-projected-g5g6
Jul 19 11:25:35.242: INFO: Deleting pod "pod-subpath-test-projected-g5g6" in namespace "subpath-3615"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 11:25:35.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3615" for this suite.
• [SLOW TEST:24.388 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":1,"skipped":0,"failed":0}
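For readers reproducing the subPath behavior outside the harness, a minimal sketch follows; all names are illustrative (the suite builds its own fixture), and it assumes a ConfigMap named demo-config with a key index.html already exists:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                  # hypothetical name, not the suite's pod
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: demo-config           # assumed to exist with key index.html
  containers:
  - name: test
    image: busybox:1.29
    # read the single file exposed through the subPath mount
    command: ["cat", "/mnt/file/index.html"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/file/index.html
      subPath: index.html             # mount one key of the volume, not the whole directory
EOF

subPath mounts are implemented with bind mounts on Linux, which is why the spec carries the [LinuxOnly] tag.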
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 11:25:35.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 11:25:36.348: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 11:25:38.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730754736, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730754736, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730754736, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730754736, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 11:25:41.421: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 11:25:41.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6638" for this suite.
STEP: Destroying namespace "webhook-6638-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.399 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":2,"skipped":0,"failed":0}
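The registration step above reduces to a single AdmissionRegistration object. A minimal sketch, assuming a webhook server is already running behind the e2e-test-webhook service seen in the log; the handler path and CA bundle are placeholders, not values from this run:

cat <<EOF | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutating-pods            # hypothetical name
webhooks:
- name: demo-pods.example.com
  clientConfig:
    service:
      name: e2e-test-webhook          # service name from the log above
      namespace: webhook-6638
      path: /mutating-pods            # assumed handler path
    caBundle: ${CA_BUNDLE}            # assumed env var: base64 CA that signed the serving cert
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
EOF

Once this object exists, every matching pod CREATE is sent to the service for mutation before admission, which is what "create a pod that should be updated by the webhook" exercises.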
STEP: Destroying namespace "webhook-6638-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.399 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":2,"skipped":0,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:25:41.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8151.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8151.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 19 11:25:49.781: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.784: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.787: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.790: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.798: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.801: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.803: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.806: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:49.813: INFO: Lookups using dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local]
Jul 19 11:25:54.817: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:54.820: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:54.822: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:54.825: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:55.254: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: Get https://172.30.12.66:45705/api/v1/namespaces/dns-8151/pods/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34/proxy/results/wheezy_udp@PodARecord: stream error: stream ID 187; INTERNAL_ERROR
Jul 19 11:25:55.316: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:55.319: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:55.322: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:55.325: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:25:55.337: INFO: Lookups using dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local wheezy_udp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local]
Jul 19 11:26:00.007: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.053: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.077: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.199: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.262: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.378: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.450: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.624: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:00.636: INFO: Lookups using dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local]
Jul 19 11:26:04.818: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.822: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.825: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.828: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.837: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.840: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.843: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.846: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:04.851: INFO: Lookups using dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local]
Jul 19 11:26:09.818: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.821: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.824: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.827: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.836: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.839: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.841: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.843: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:09.849: INFO: Lookups using dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local]
Jul 19 11:26:14.844: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.848: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.850: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.853: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.861: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.864: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.866: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.874: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local from pod dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34: the server could not find the requested resource (get pods dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34)
Jul 19 11:26:14.879: INFO: Lookups using dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8151.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8151.svc.cluster.local jessie_udp@dns-test-service-2.dns-8151.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8151.svc.cluster.local]
Jul 19 11:26:19.935: INFO: DNS probes using dns-8151/dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 11:26:20.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8151" for this suite.
• [SLOW TEST:39.067 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":3,"skipped":34,"failed":0}
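Because the probes are plain dig invocations, a single lookup can be replayed by hand against the same headless service while the probe pod is alive. A sketch reusing the names from this run; the jessie-querier container name is an assumption about the probe pod's layout:

# run one subdomain lookup inside the DNS probe pod
kubectl exec -n dns-8151 dns-test-e4598e66-a354-4f50-a5ad-cbce59d40c34 -c jessie-querier -- \
  dig +notcp +noall +answer +search dns-test-service-2.dns-8151.svc.cluster.local A
# a headless service answers with one A record per ready endpoint;
# an empty answer section is what the probe loop records as a failed lookup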
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 11:26:20.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 11:26:21.142: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6" in namespace "security-context-test-3313" to be "success or failure"
Jul 19 11:26:21.184: INFO: Pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.509148ms
Jul 19 11:26:23.318: INFO: Pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175976791s
Jul 19 11:26:25.545: INFO: Pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403189725s
Jul 19 11:26:27.581: INFO: Pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43952401s
Jul 19 11:26:29.586: INFO: Pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.44441645s
Jul 19 11:26:29.586: INFO: Pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6" satisfied condition "success or failure"
Jul 19 11:26:29.592: INFO: Got logs for pod "busybox-privileged-false-2bab54e2-646f-4182-95fd-d18759500cf6": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 11:26:29.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3313" for this suite.
• [SLOW TEST:8.882 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":67,"failed":0}
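The RTNETLINK failure above is the expected outcome: without privileged mode the container lacks CAP_NET_ADMIN, so netlink operations are refused. A minimal sketch that reproduces it (pod name and command are illustrative; the suite's pod runs an equivalent ip invocation):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    # creating a network interface needs CAP_NET_ADMIN, which an unprivileged container lacks
    command: ["sh", "-c", "ip link add dummy0 type dummy"]
    securityContext:
      privileged: false
EOF
# expect the same "RTNETLINK answers: Operation not permitted" seen in the log
kubectl logs unprivileged-demo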
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 11:26:29.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 19 11:26:29.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6010'
Jul 19 11:26:35.495: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 19 11:26:35.495: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
Jul 19 11:26:39.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6010'
Jul 19 11:26:40.372: INFO: stderr: ""
Jul 19 11:26:40.372: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 11:26:40.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6010" for this suite.
• [SLOW TEST:10.798 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":5,"skipped":89,"failed":0}
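The deprecation warning in the stderr above is worth acting on: generators were later removed from kubectl run entirely. Following the warning's own suggestion, the non-deprecated equivalent of the command the test ran is:

kubectl create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine \
  --namespace=kubectl-6010
# verify the Deployment and its pod, as the test does
# (kubectl create deployment labels the pods app=<name>)
kubectl get deployment,pods -n kubectl-6010 -l app=e2e-test-httpd-deployment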
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 11:26:40.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 19 11:26:48.184: INFO: 9 pods remaining
Jul 19 11:26:48.184: INFO: 0 pods has nil DeletionTimestamp
Jul 19 11:26:48.184: INFO:
Jul 19 11:26:50.199: INFO: 0 pods remaining
Jul 19 11:26:50.199: INFO: 0 pods has nil DeletionTimestamp
Jul 19 11:26:50.199: INFO:
STEP: Gathering metrics
W0719 11:26:51.612645 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 19 11:26:51.612: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 11:26:51.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4585" for this suite.
• [SLOW TEST:12.099 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":6,"skipped":97,"failed":0}
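The deleteOptions the spec title refers to is propagationPolicy. A sketch of issuing the same style of delete against the raw API (the RC name is hypothetical; kubectl at v1.17 only exposed a boolean --cascade flag, so the policy is clearest in the request body):

kubectl proxy --port=8001 &
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/gc-4585/replicationcontrollers/demo-rc
# with Foreground, the RC keeps existing (with a deletionTimestamp) until its pods
# are gone, which is the "9 pods remaining ... 0 pods remaining" countdown above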
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 11:26:52.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0719 11:26:54.075836 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 19 11:26:54.075: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 11:26:54.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9901" for this suite.
•
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":7,"skipped":105,"failed":0}
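Not orphaning is the default behavior: a plain delete lets the garbage collector remove the Deployment's ReplicaSets and Pods afterwards, which is why the test briefly sees "expected 0 rs, got 1 rs" before the GC catches up. A sketch with a hypothetical name:

kubectl delete deployment demo-deployment
# the Deployment object is removed at once; its ReplicaSet and Pods linger briefly
# until the garbage collector processes their ownerReferences, as observed above
kubectl get rs,pods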
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":7,"skipped":105,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:26:54.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 19 11:26:54.114: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:27:04.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4916" for this suite. • [SLOW TEST:11.337 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":8,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:27:05.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:27:06.299: INFO: Creating deployment "webserver-deployment" Jul 19 11:27:06.384: INFO: Waiting for observed generation 1 Jul 19 11:27:08.551: INFO: Waiting for all required pods to come up Jul 19 11:27:08.555: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jul 19 11:27:26.614: INFO: Waiting for deployment "webserver-deployment" to complete Jul 19 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 11:27:05.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 11:27:06.299: INFO: Creating deployment "webserver-deployment"
Jul 19 11:27:06.384: INFO: Waiting for observed generation 1
Jul 19 11:27:08.551: INFO: Waiting for all required pods to come up
Jul 19 11:27:08.555: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 19 11:27:26.614: INFO: Waiting for deployment "webserver-deployment" to complete
Jul 19 11:27:26.618: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul 19 11:27:26.622: INFO: Updating deployment webserver-deployment
Jul 19 11:27:26.622: INFO: Waiting for observed generation 2
Jul 19 11:27:29.259: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 19 11:27:29.888: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 19 11:27:30.468: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 19 11:27:31.499: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 19 11:27:31.499: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 19 11:27:31.504: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 19 11:27:31.508: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul 19 11:27:31.508: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul 19 11:27:31.512: INFO: Updating deployment webserver-deployment
Jul 19 11:27:31.512: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul 19 11:27:32.080: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 19 11:27:32.446: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul 19 11:27:33.426: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5678 /apis/apps/v1/namespaces/deployment-5678/deployments/webserver-deployment 2b8cf851-246b-442c-be8a-fe235fd5e920 2402907 3 2020-07-19 11:27:06 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001e5d248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-07-19 11:27:29 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-19 11:27:32 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Jul 19 11:27:33.607: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5678 /apis/apps/v1/namespaces/deployment-5678/replicasets/webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 2402951 3 2020-07-19 11:27:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2b8cf851-246b-442c-be8a-fe235fd5e920 0xc001c1ea37 0xc001c1ea38}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c1eaa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 19 11:27:33.607: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul 19 11:27:33.607: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5678 /apis/apps/v1/namespaces/deployment-5678/replicasets/webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 2402952 3 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2b8cf851-246b-442c-be8a-fe235fd5e920 0xc001c1e977 0xc001c1e978}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c1e9d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jul 19 11:27:33.808: INFO: Pod "webserver-deployment-595b5b9587-44jft" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-44jft webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-44jft 6dad830d-c0a9-409c-9dd6-7c4fa7fe9c7e 2402754 0 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1ef47 0xc001c1ef48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.102,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e238473abe44cfd3416aafe3dc91166e418f4fb0bc219e107450f1483e3d0765,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.809: INFO: Pod "webserver-deployment-595b5b9587-5mthn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5mthn webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-5mthn d82376f7-2b97-48b9-87c4-d155cca7bc7d 2402945 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1f0c7 0xc001c1f0c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.809: INFO: Pod "webserver-deployment-595b5b9587-6ngbj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6ngbj webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-6ngbj a528bbc2-d59a-4a53-b5db-b90404ceb8c7 2402767 0 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet
webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1f207 0xc001c1f208}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.134,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://510a9d730a2d0eb41c377823ea86ffd513278effb39c7d8c2a14b77725987b10,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.809: INFO: Pod "webserver-deployment-595b5b9587-82xpx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-82xpx webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-82xpx 23db32c9-13f5-4f73-aed7-14a81b46f598 2402802 0 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1f3a7 0xc001c1f3a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.104,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ca54ffff7842802738fe1d1fd279c3b044f14de39a65e7289e5aa00b42ac566f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.810: INFO: Pod "webserver-deployment-595b5b9587-9v6lt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9v6lt webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-9v6lt 764a5c6b-15d7-4722-85a6-7d2bdd31386f 2402948 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1f527 0xc001c1f528}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.810: INFO: Pod "webserver-deployment-595b5b9587-bxxk8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bxxk8 webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-bxxk8 57cf3d35-bc6f-43f6-81f1-d334fd51f63e 2402930 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1f647 0xc001c1f648}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.810: INFO: Pod "webserver-deployment-595b5b9587-fx78j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fx78j webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-fx78j 2a5bc5cd-592b-4a37-9471-0b5234c28f76 2402780 0 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1f767 0xc001c1f768}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.103,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ddce55f8bfa626f3f4cb6b0474eb365263160deb71a024e8a4941d27666a5d14,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.810: INFO: Pod "webserver-deployment-595b5b9587-gbgdc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gbgdc webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-gbgdc 5286ee7d-be40-4dc5-9550-b044f9b16989 2402809 0 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1f8e7 0xc001c1f8e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.136,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9c54fea6546bb5b2fd616618d2cc0e074521b8e2f1dd5bda192cd96e0eedceb8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.810: INFO: Pod "webserver-deployment-595b5b9587-gp6cw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gp6cw webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-gp6cw 86e27d1b-6f6c-40ea-8ea2-86d17affb49a 2402931 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1fa67 0xc001c1fa68}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.811: INFO: Pod "webserver-deployment-595b5b9587-ll5b6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ll5b6 webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-ll5b6 7d195e64-5264-4050-94b3-3c81d429ab13 2402946 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1fb87 0xc001c1fb88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.811: INFO: Pod "webserver-deployment-595b5b9587-nkzff" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nkzff webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-nkzff f30e70de-b706-481e-a3ea-56fd80d193e7 2402929 0 
2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1fca7 0xc001c1fca8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.811: INFO: Pod "webserver-deployment-595b5b9587-nlxl6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nlxl6 webserver-deployment-595b5b9587- deployment-5678 
/api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-nlxl6 cfdb74d1-948a-4281-acf0-cb1f0c9ca838 2402912 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1fdc7 0xc001c1fdc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.811: INFO: Pod "webserver-deployment-595b5b9587-nsvc4" is available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nsvc4 webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-nsvc4 3bf326f0-f48d-43fe-96da-1e9869ce2b50 2402814 0 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc001c1fee7 0xc001c1fee8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.135,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b9ac4c472ccd484b903d5f854d4b7d2b820d9b4a8f93f474e622a68da4e8c375,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.811: INFO: Pod "webserver-deployment-595b5b9587-ntdhf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ntdhf webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-ntdhf 4650e501-016e-4b68-900b-95553371c9df 2402960 0 2020-07-19 11:27:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc002dc4067 0xc002dc4068}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-19 11:27:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 11:27:33.811: INFO: Pod "webserver-deployment-595b5b9587-nxj7b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nxj7b webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-nxj7b 55fccaf3-341f-4376-930e-ce36d7879e17 2402932 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc002dc41c7 0xc002dc41c8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.812: INFO: Pod "webserver-deployment-595b5b9587-pgmlr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pgmlr webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-pgmlr 8f00235b-95a4-4b9b-89be-ec1b98ae5204 2402949 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc002dc42e7 0xc002dc42e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.812: INFO: Pod "webserver-deployment-595b5b9587-pjc26" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pjc26 webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-pjc26 0c9a86de-c23e-40ca-9a0e-e99be775c141 2402793 0 2020-07-19 
11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc002dc4407 0xc002dc4408}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.105,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7809aca4a9c57aa865b232dd67f75a533f2901342495f9671ea8ba568f0cb114,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.812: INFO: Pod "webserver-deployment-595b5b9587-t6jx7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t6jx7 webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-t6jx7 a4cb9d39-bc1c-425b-90f5-02a6290743bb 2402739 0 2020-07-19 11:27:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc002dc4587 0xc002dc4588}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.101,StartTime:2020-07-19 11:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:27:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0d3adc6822f1ac8f8028c8f6f87bb36ead1afbf0fc9d3957f8a37cec74d97a10,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.812: INFO: Pod "webserver-deployment-595b5b9587-tk5ms" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tk5ms webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-tk5ms a68cccb9-c180-4851-877a-fdd0a5c956d4 2402941 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc002dc4707 0xc002dc4708}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.813: INFO: Pod "webserver-deployment-595b5b9587-wrp4q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wrp4q webserver-deployment-595b5b9587- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-595b5b9587-wrp4q 99740c82-1083-4d04-b242-1d8f6f4a433d 2402958 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 418b5ac6-9a50-4627-9e19-169271cb9335 0xc002dc4827 0xc002dc4828}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-19 11:27:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.813: INFO: Pod "webserver-deployment-c7997dcc8-5v4tq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5v4tq webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-5v4tq fea3b3cc-2639-4441-b24d-2f2cb256d7a5 2402968 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc4987 0xc002dc4988}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-19 11:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.813: INFO: Pod "webserver-deployment-c7997dcc8-6v7m5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6v7m5 webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-6v7m5 18e09d3f-4ff7-46a6-8ba5-a4a17130d700 2402939 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc4b07 0xc002dc4b08}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.813: INFO: Pod "webserver-deployment-c7997dcc8-8wd4l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8wd4l webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-8wd4l 751d92de-6188-4f79-880e-8a234c73199d 2402883 0 2020-07-19 11:27:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc4c37 0xc002dc4c38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-19 11:27:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.814: INFO: Pod "webserver-deployment-c7997dcc8-b7rwg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7rwg webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-b7rwg 4fd1020e-a0e1-4772-bfe8-904a27f5f3b9 2402884 0 2020-07-19 11:27:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc4dc7 0xc002dc4dc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-19 11:27:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.814: INFO: Pod "webserver-deployment-c7997dcc8-f4b8q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f4b8q webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-f4b8q fc36a4e6-5ee0-4fd0-b17b-c3f0bd919d9a 2402927 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc4f47 0xc002dc4f48}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.814: INFO: Pod "webserver-deployment-c7997dcc8-g2ftl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g2ftl webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-g2ftl e1c952c4-e281-42db-a722-38fb27958396 2402859 0 2020-07-19 11:27:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc5077 0xc002dc5078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-19 11:27:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.814: INFO: Pod "webserver-deployment-c7997dcc8-kl8c6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kl8c6 webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-kl8c6 ac6e54b7-5252-409b-80f7-f11592e4f6c6 2402938 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc51f7 0xc002dc51f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.815: INFO: Pod "webserver-deployment-c7997dcc8-nprb5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nprb5 webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-nprb5 26b5ba0a-7062-4c1d-9463-c114f25aa218 2402969 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc5327 0xc002dc5328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-19 11:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.815: INFO: Pod "webserver-deployment-c7997dcc8-sbz2v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sbz2v webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-sbz2v 8c71d64d-f3e9-4ee4-a47d-e901ce380507 2402940 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc54a7 0xc002dc54a8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.815: INFO: Pod "webserver-deployment-c7997dcc8-tvlt6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tvlt6 webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-tvlt6 5fbf82be-1007-4df5-9a8c-482b407daa09 2402947 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc55d7 0xc002dc55d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.815: INFO: Pod "webserver-deployment-c7997dcc8-tvv2z" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tvv2z webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-tvv2z 19781b72-f24e-4f41-bed6-1961e268bff4 2402871 0 2020-07-19 11:27:26 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc5707 0xc002dc5708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-19 11:27:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-19 11:27:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.815: INFO: Pod "webserver-deployment-c7997dcc8-z5dc5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z5dc5 webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-z5dc5 dfcd3fa8-bf01-40e2-978d-a58026c8786a 2402852 0 2020-07-19 11:27:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc5887 0xc002dc5888}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-19 11:27:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 19 11:27:33.815: INFO: Pod "webserver-deployment-c7997dcc8-zz4zw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zz4zw webserver-deployment-c7997dcc8- deployment-5678 /api/v1/namespaces/deployment-5678/pods/webserver-deployment-c7997dcc8-zz4zw 0d93ba94-c295-420b-952c-c277d5beaf05 2402955 0 2020-07-19 11:27:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b50b8942-1513-44fe-aacb-0911eea97bbc 0xc002dc5a07 0xc002dc5a08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5znf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5znf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5znf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:27:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:27:33.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5678" for this suite. 
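The dumps above show the webserver-deployment pods of the new ReplicaSet stuck Pending on the unresolvable image webserver:404 while the deployment is scaled up. Proportional scaling means the deployment controller splits the added replicas between the old and new ReplicaSets in proportion to their current sizes instead of funnelling them all into one. Below is a minimal client-go sketch of the same sequence, assuming the 1.17-era method signatures (no context argument) that match this run; the names and counts mirror the log, but treat this as an illustration, not the test's actual code.

// Sketch: wedge a rolling update on an unresolvable image, then scale the
// deployment and let the controller distribute replicas proportionally.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "deployment-5678"

	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "httpd"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "httpd"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: "docker.io/library/httpd:2.4.38-alpine"}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments(ns).Create(dep); err != nil {
		panic(err)
	}

	// Wedge the rollout: webserver:404 can never be pulled, so the new
	// ReplicaSet stalls within the maxSurge/maxUnavailable bounds.
	d, err := client.AppsV1().Deployments(ns).Get("webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Spec.Template.Spec.Containers[0].Image = "webserver:404"
	if _, err := client.AppsV1().Deployments(ns).Update(d); err != nil {
		panic(err)
	}

	// Scale while the rollout is stuck; the controller spreads the new
	// replicas across both ReplicaSets in proportion to their sizes.
	d, err = client.AppsV1().Deployments(ns).Get("webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Spec.Replicas = int32Ptr(30)
	if _, err := client.AppsV1().Deployments(ns).Update(d); err != nil {
		panic(err)
	}
}

Listing the two ReplicaSets after the final Update would show both scaled up, with the split proportional to the sizes they had when the rollout stalled.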
• [SLOW TEST:29.447 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":9,"skipped":166,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:27:34.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:27:39.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290" in namespace "projected-3059" to be "success or failure" Jul 19 11:27:39.978: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 711.66936ms Jul 19 11:27:42.223: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.956601767s Jul 19 11:27:44.780: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 5.513535573s Jul 19 11:27:46.797: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 7.530990257s Jul 19 11:27:50.176: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 10.910054328s Jul 19 11:27:52.381: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 13.114992317s Jul 19 11:27:55.045: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 15.778555704s Jul 19 11:27:57.229: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 17.962537551s Jul 19 11:27:59.738: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Pending", Reason="", readiness=false. Elapsed: 20.471800045s Jul 19 11:28:02.110: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.843616411s STEP: Saw pod success Jul 19 11:28:02.110: INFO: Pod "downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290" satisfied condition "success or failure" Jul 19 11:28:02.121: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290 container client-container: STEP: delete the pod Jul 19 11:28:02.810: INFO: Waiting for pod downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290 to disappear Jul 19 11:28:02.815: INFO: Pod downwardapi-volume-cdaf89cb-01bf-46eb-bd2a-0c62d7d6f290 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:28:02.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3059" for this suite. • [SLOW TEST:28.433 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":176,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:28:03.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:28:22.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4844" for this suite. • [SLOW TEST:19.649 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":11,"skipped":179,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:28:22.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jul 19 11:28:35.039: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jul 19 11:28:50.209: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:28:50.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6881" for this suite. • [SLOW TEST:27.380 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":12,"skipped":188,"failed":0} [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:28:50.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-581cf391-ae3b-4778-8529-64890f2a8f7a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-581cf391-ae3b-4778-8529-64890f2a8f7a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 
11:30:14.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1533" for this suite. • [SLOW TEST:84.208 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":188,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:30:14.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:30:15.054: INFO: Creating deployment "test-recreate-deployment" Jul 19 11:30:15.525: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 19 11:30:15.550: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jul 19 11:30:17.555: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 19 11:30:17.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:30:19.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:30:21.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755015, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:30:23.562: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 19 11:30:23.567: INFO: Updating deployment test-recreate-deployment Jul 19 11:30:23.567: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 19 11:30:24.840: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3915 /apis/apps/v1/namespaces/deployment-3915/deployments/test-recreate-deployment 0f78454b-cc64-499b-b978-bebda2bae019 2404189 2 2020-07-19 11:30:15 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032ce578 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-19 11:30:24 +0000 UTC,LastTransitionTime:2020-07-19 11:30:24 +0000
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-07-19 11:30:24 +0000 UTC,LastTransitionTime:2020-07-19 11:30:15 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jul 19 11:30:24.842: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3915 /apis/apps/v1/namespaces/deployment-3915/replicasets/test-recreate-deployment-5f94c574ff d593a894-7d4a-45e5-9e68-b1a7b6be61f9 2404185 1 2020-07-19 11:30:23 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0f78454b-cc64-499b-b978-bebda2bae019 0xc0032ce8f7 0xc0032ce8f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032ce958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 19 11:30:24.842: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 19 11:30:24.842: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3915 /apis/apps/v1/namespaces/deployment-3915/replicasets/test-recreate-deployment-799c574856 9309e74a-cb5f-481f-baa9-b00f33248434 2404173 2 2020-07-19 11:30:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0f78454b-cc64-499b-b978-bebda2bae019 0xc0032ce9c7 0xc0032ce9c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032cea38 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 19 11:30:24.845: INFO: Pod "test-recreate-deployment-5f94c574ff-c9zhv" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-c9zhv test-recreate-deployment-5f94c574ff- deployment-3915 /api/v1/namespaces/deployment-3915/pods/test-recreate-deployment-5f94c574ff-c9zhv 31189295-30db-48bf-8aee-4e75091d4284 2404190 0 2020-07-19 11:30:24 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff d593a894-7d4a-45e5-9e68-b1a7b6be61f9 0xc0032cee77 0xc0032cee78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qm6ml,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qm6ml,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qm6ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySprea
dConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:30:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:30:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:30:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-19 11:30:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:30:24.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3915" for this suite. • [SLOW TEST:10.308 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":14,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:30:24.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2250 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 19 11:30:25.580: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 19 11:31:00.156: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.122:8080/dial?request=hostname&protocol=udp&host=10.244.2.159&port=8081&tries=1'] Namespace:pod-network-test-2250 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 11:31:00.156: INFO: >>> kubeConfig: /root/.kube/config I0719 11:31:00.183367 6 log.go:172] (0xc0019040b0) (0xc002892320) Create stream I0719 11:31:00.183402 6 log.go:172] (0xc0019040b0) (0xc002892320) Stream added, broadcasting: 1 I0719 11:31:00.185365 6 log.go:172] (0xc0019040b0) Reply frame received for 1 I0719 11:31:00.185406 6 log.go:172] (0xc0019040b0) (0xc001dfe000) Create stream I0719 11:31:00.185418 6 log.go:172] (0xc0019040b0) (0xc001dfe000) Stream added, broadcasting: 3 I0719 11:31:00.186521 6 log.go:172] (0xc0019040b0) Reply frame received for 3 I0719 11:31:00.186563 6 log.go:172] (0xc0019040b0) (0xc00281a0a0) Create stream I0719 11:31:00.186577 6 log.go:172] (0xc0019040b0) (0xc00281a0a0) Stream added, broadcasting: 5 I0719 11:31:00.187488 6 log.go:172] (0xc0019040b0) Reply frame received for 5 I0719 11:31:00.245564 6 log.go:172] (0xc0019040b0) Data frame received for 3 I0719 11:31:00.245585 6 log.go:172] (0xc001dfe000) (3) Data frame handling I0719 11:31:00.245597 6 log.go:172] (0xc001dfe000) (3) Data frame sent I0719 11:31:00.246476 6 log.go:172] (0xc0019040b0) Data frame received for 5 I0719 11:31:00.246495 6 log.go:172] (0xc00281a0a0) (5) Data frame handling I0719 11:31:00.246538 6 log.go:172] (0xc0019040b0) Data frame received for 3 I0719 11:31:00.246565 6 log.go:172] (0xc001dfe000) (3) Data frame handling I0719 11:31:00.248384 6 log.go:172] (0xc0019040b0) Data frame received for 1 I0719 11:31:00.248410 6 log.go:172] (0xc002892320) (1) Data frame handling I0719 11:31:00.248435 6 log.go:172] (0xc002892320) (1) Data frame sent I0719 11:31:00.248463 6 log.go:172] (0xc0019040b0) (0xc002892320) Stream removed, broadcasting: 1 I0719 11:31:00.248687 6 log.go:172] (0xc0019040b0) Go away received I0719 11:31:00.248919 6 log.go:172] (0xc0019040b0) (0xc002892320) Stream removed, broadcasting: 1 I0719 11:31:00.248944 6 log.go:172] (0xc0019040b0) (0xc001dfe000) Stream removed, broadcasting: 3 I0719 11:31:00.248960 6 log.go:172] (0xc0019040b0) (0xc00281a0a0) Stream removed, broadcasting: 5 Jul 19 11:31:00.248: INFO: Waiting for responses: map[] Jul 19 11:31:00.252: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.122:8080/dial?request=hostname&protocol=udp&host=10.244.1.121&port=8081&tries=1'] Namespace:pod-network-test-2250 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 11:31:00.252: INFO: >>> kubeConfig: /root/.kube/config I0719 11:31:00.281412 6 log.go:172] (0xc000bc04d0) (0xc001dfe460) Create stream I0719 11:31:00.281435 6 log.go:172] (0xc000bc04d0) (0xc001dfe460) Stream added, broadcasting: 1 I0719 11:31:00.283124 6 log.go:172] (0xc000bc04d0) Reply frame received for 1 I0719 11:31:00.283164 6 log.go:172] (0xc000bc04d0) (0xc001dfe500) Create stream I0719 11:31:00.283178 6 log.go:172] (0xc000bc04d0) (0xc001dfe500) Stream added, broadcasting: 3 I0719 11:31:00.284087 6 log.go:172] (0xc000bc04d0) Reply frame received for 3 I0719 11:31:00.284138 6 log.go:172] (0xc000bc04d0) (0xc002302000) Create stream I0719 11:31:00.284157 6 log.go:172] (0xc000bc04d0) (0xc002302000) Stream added, broadcasting: 5 I0719 11:31:00.285247 6 log.go:172] (0xc000bc04d0) Reply frame received for 5 I0719 11:31:00.342078 6 log.go:172] (0xc000bc04d0) 
Data frame received for 3 I0719 11:31:00.342115 6 log.go:172] (0xc001dfe500) (3) Data frame handling I0719 11:31:00.342196 6 log.go:172] (0xc001dfe500) (3) Data frame sent I0719 11:31:00.342703 6 log.go:172] (0xc000bc04d0) Data frame received for 5 I0719 11:31:00.342736 6 log.go:172] (0xc002302000) (5) Data frame handling I0719 11:31:00.342767 6 log.go:172] (0xc000bc04d0) Data frame received for 3 I0719 11:31:00.342778 6 log.go:172] (0xc001dfe500) (3) Data frame handling I0719 11:31:00.344211 6 log.go:172] (0xc000bc04d0) Data frame received for 1 I0719 11:31:00.344241 6 log.go:172] (0xc001dfe460) (1) Data frame handling I0719 11:31:00.344264 6 log.go:172] (0xc001dfe460) (1) Data frame sent I0719 11:31:00.344286 6 log.go:172] (0xc000bc04d0) (0xc001dfe460) Stream removed, broadcasting: 1 I0719 11:31:00.344313 6 log.go:172] (0xc000bc04d0) Go away received I0719 11:31:00.344408 6 log.go:172] (0xc000bc04d0) (0xc001dfe460) Stream removed, broadcasting: 1 I0719 11:31:00.344428 6 log.go:172] (0xc000bc04d0) (0xc001dfe500) Stream removed, broadcasting: 3 I0719 11:31:00.344436 6 log.go:172] (0xc000bc04d0) (0xc002302000) Stream removed, broadcasting: 5 Jul 19 11:31:00.344: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:31:00.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2250" for this suite. • [SLOW TEST:35.709 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":249,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:31:00.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0719 11:31:32.680210 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
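The garbage-collector steps above create a deployment, delete it with deleteOptions.PropagationPolicy=Orphan, and then wait 30 seconds to confirm the dependent ReplicaSet survives with its ownerReference stripped rather than being cascade-deleted. A hedged client-go sketch of issuing such a delete, using the 1.17-era Delete signature; the deployment name here is an assumption, since the log never prints it:

// Sketch: delete a Deployment but orphan its dependents. The garbage
// collector removes the ownerReference from the ReplicaSet instead of
// deleting it.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	policy := metav1.DeletePropagationOrphan
	err = client.AppsV1().Deployments("gc-7417").Delete(
		"simpletest.deployment", // name assumed for illustration
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}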
Jul 19 11:31:32.680: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:31:32.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7417" for this suite. • [SLOW TEST:32.126 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":16,"skipped":260,"failed":0} SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:31:32.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 19 11:31:33.188: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:31:47.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7529" for this suite. 
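The pod-lifecycle test above sets up a watch before submitting the pod, so creation and graceful deletion are observed as events rather than polled for. A minimal client-go sketch of that pattern, assuming the 1.17-era Watch signature and an illustrative label selector:

// Sketch: watch pod lifecycle events. Starting the watch before creating
// the pod guarantees the ADDED and DELETED events are both observed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	w, err := client.CoreV1().Pods("pods-7529").Watch(metav1.ListOptions{
		LabelSelector: "time=created", // selector assumed for illustration
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event carries the full pod object at that point in its lifecycle.
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s %s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
		if ev.Type == "DELETED" {
			return // deletion observed, which is what the test verifies
		}
	}
}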
• [SLOW TEST:14.845 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":263,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:31:47.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:31:47.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8" in namespace "projected-5379" to be "success or failure" Jul 19 11:31:47.672: INFO: Pod "downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.017395ms Jul 19 11:31:49.723: INFO: Pod "downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069178664s Jul 19 11:31:51.727: INFO: Pod "downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8": Phase="Running", Reason="", readiness=true. Elapsed: 4.073414918s Jul 19 11:31:53.903: INFO: Pod "downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249259127s STEP: Saw pod success Jul 19 11:31:53.903: INFO: Pod "downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8" satisfied condition "success or failure" Jul 19 11:31:53.906: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8 container client-container: STEP: delete the pod Jul 19 11:31:54.241: INFO: Waiting for pod downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8 to disappear Jul 19 11:31:54.271: INFO: Pod downwardapi-volume-d9f3a60c-e13a-4102-9a8e-416324a61fa8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:31:54.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5379" for this suite. 
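The downward API volume used above is a projected volume whose file is populated from a resourceFieldRef on the container itself; with resource requests.cpu and a divisor of 1m, the mounted file holds the CPU request in millicores. A hedged sketch of an equivalent pod follows; the image and command are assumptions, as the e2e uses its own mount-test image:

// Sketch: expose the container's own CPU request as a file via a projected
// downwardAPI volume.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
										// Divisor 1m makes the file read "250"
										// for a 250m request.
										Divisor: resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("projected-5379").Create(pod); err != nil {
		panic(err)
	}
}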
• [SLOW TEST:6.763 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:31:54.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jul 19 11:31:54.552: INFO: Waiting up to 5m0s for pod "pod-a0c219c9-9e18-442c-9364-66bb77aa753a" in namespace "emptydir-655" to be "success or failure" Jul 19 11:31:54.795: INFO: Pod "pod-a0c219c9-9e18-442c-9364-66bb77aa753a": Phase="Pending", Reason="", readiness=false. Elapsed: 242.928265ms Jul 19 11:31:56.922: INFO: Pod "pod-a0c219c9-9e18-442c-9364-66bb77aa753a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369277188s Jul 19 11:31:59.035: INFO: Pod "pod-a0c219c9-9e18-442c-9364-66bb77aa753a": Phase="Running", Reason="", readiness=true. Elapsed: 4.4820616s Jul 19 11:32:01.037: INFO: Pod "pod-a0c219c9-9e18-442c-9364-66bb77aa753a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.484883477s STEP: Saw pod success Jul 19 11:32:01.037: INFO: Pod "pod-a0c219c9-9e18-442c-9364-66bb77aa753a" satisfied condition "success or failure" Jul 19 11:32:01.039: INFO: Trying to get logs from node jerma-worker2 pod pod-a0c219c9-9e18-442c-9364-66bb77aa753a container test-container: STEP: delete the pod Jul 19 11:32:01.274: INFO: Waiting for pod pod-a0c219c9-9e18-442c-9364-66bb77aa753a to disappear Jul 19 11:32:01.424: INFO: Pod pod-a0c219c9-9e18-442c-9364-66bb77aa753a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:01.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-655" for this suite. 
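The emptyDir spec above is about the permission bits of the mount. Leaving Medium unset (StorageMediumDefault) backs the volume with node-local disk rather than tmpfs, and the conformance expectation is that the directory comes up world-writable (0777). A sketch of the pod shape, with image and paths illustrative:

    func emptyDirModePod(ns string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-", Namespace: ns},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    // Print the mount's mode bits; a default-medium emptyDir
                    // is expected to report 777.
                    Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // Empty Medium selects the node's default storage (disk-backed),
                    // as opposed to corev1.StorageMediumMemory (tmpfs).
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                    },
                }},
            },
        }
    }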
• [SLOW TEST:7.175 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":289,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:01.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-c590ce4a-3231-4a10-bc51-76f338d7f6a2 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:01.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5755" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":20,"skipped":311,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:01.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:32:01.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88" in namespace "projected-6233" to be "success or failure" Jul 19 11:32:01.778: INFO: Pod "downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88": Phase="Pending", Reason="", readiness=false. Elapsed: 32.379546ms Jul 19 11:32:03.957: INFO: Pod "downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211151873s Jul 19 11:32:05.961: INFO: Pod "downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.214786051s STEP: Saw pod success Jul 19 11:32:05.961: INFO: Pod "downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88" satisfied condition "success or failure" Jul 19 11:32:05.963: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88 container client-container: STEP: delete the pod Jul 19 11:32:05.999: INFO: Waiting for pod downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88 to disappear Jul 19 11:32:06.008: INFO: Pod downwardapi-volume-05086786-7168-4c10-a1a2-06408fda8d88 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:06.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6233" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":314,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:06.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 19 11:32:06.072: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 19 11:32:06.087: INFO: Waiting for terminating namespaces to be deleted... 
Jul 19 11:32:06.089: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jul 19 11:32:06.093: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded) Jul 19 11:32:06.093: INFO: Container kube-proxy ready: true, restart count 0 Jul 19 11:32:06.093: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded) Jul 19 11:32:06.093: INFO: Container kindnet-cni ready: true, restart count 0 Jul 19 11:32:06.093: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jul 19 11:32:06.097: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded) Jul 19 11:32:06.097: INFO: Container kube-proxy ready: true, restart count 0 Jul 19 11:32:06.097: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded) Jul 19 11:32:06.097: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Jul 19 11:32:06.233: INFO: Pod kindnet-bqk7h requesting resource cpu=100m on Node jerma-worker Jul 19 11:32:06.233: INFO: Pod kindnet-klj8h requesting resource cpu=100m on Node jerma-worker2 Jul 19 11:32:06.233: INFO: Pod kube-proxy-2ssxj requesting resource cpu=0m on Node jerma-worker Jul 19 11:32:06.233: INFO: Pod kube-proxy-67jwf requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jul 19 11:32:06.233: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Jul 19 11:32:06.241: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-371813f5-dff3-4999-ba6a-7856cbabac42.1623240bd6719274], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6169/filler-pod-371813f5-dff3-4999-ba6a-7856cbabac42 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-371813f5-dff3-4999-ba6a-7856cbabac42.1623240c6cb5c788], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-371813f5-dff3-4999-ba6a-7856cbabac42.1623240cd8b66cc4], Reason = [Created], Message = [Created container filler-pod-371813f5-dff3-4999-ba6a-7856cbabac42] STEP: Considering event: Type = [Normal], Name = [filler-pod-371813f5-dff3-4999-ba6a-7856cbabac42.1623240ce98d1f12], Reason = [Started], Message = [Started container filler-pod-371813f5-dff3-4999-ba6a-7856cbabac42] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbe661d5-b9d6-4aee-acc0-56ee687ab4fb.1623240bd47a5e6c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6169/filler-pod-fbe661d5-b9d6-4aee-acc0-56ee687ab4fb to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbe661d5-b9d6-4aee-acc0-56ee687ab4fb.1623240c22ab6711], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbe661d5-b9d6-4aee-acc0-56ee687ab4fb.1623240cab5196b3], Reason = [Created], Message = [Created container filler-pod-fbe661d5-b9d6-4aee-acc0-56ee687ab4fb] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbe661d5-b9d6-4aee-acc0-56ee687ab4fb.1623240cc63aa74e], Reason = [Started], Message = [Started container filler-pod-fbe661d5-b9d6-4aee-acc0-56ee687ab4fb] STEP: Considering event: Type = [Warning], Name = [additional-pod.1623240d9251f084], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:15.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6169" for this suite. 
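The predicate validated above is pure request accounting: a pod fits a node only if the sum of CPU requests already bound there plus its own request stays within the node's allocatable; limits play no part. The filler pods are sized to leave almost nothing free, so the extra pod draws the "0/3 nodes are available" event: the control-plane node is excluded by its NoSchedule taint and both workers fail the CPU check. A hypothetical helper in the same spirit, not from the suite (ignores pod phase and init containers for brevity; client setup as in the first sketch):

    // remainingCPU approximates how much CPU the scheduler still considers free on a node.
    func remainingCPU(ctx context.Context, cs *kubernetes.Clientset, nodeName string) (resource.Quantity, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return resource.Quantity{}, err
        }
        avail := node.Status.Allocatable.Cpu().DeepCopy()
        pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
            FieldSelector: "spec.nodeName=" + nodeName, // every pod already bound to this node
        })
        if err != nil {
            return resource.Quantity{}, err
        }
        for _, p := range pods.Items {
            for _, c := range p.Spec.Containers {
                avail.Sub(*c.Resources.Requests.Cpu()) // missing requests read as zero
            }
        }
        return avail, nil
    }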
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.102 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":22,"skipped":336,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:16.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 19 11:32:17.650: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:31.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8516" for this suite. 
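The init-container contract exercised above is ordering: every initContainer must run to completion, in sequence, before any regular container starts, and with restartPolicy Always a failing init container is retried rather than failing the pod. A sketch of such a pod, names and images illustrative:

    func initContainersPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                // init1 and init2 run one after the other; run1 starts only
                // after both have exited 0.
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox", Command: []string{"true"}},
                    {Name: "init2", Image: "busybox", Command: []string{"true"}},
                },
                Containers: []corev1.Container{{Name: "run1", Image: "k8s.gcr.io/pause:3.1"}},
            },
        }
    }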
• [SLOW TEST:15.344 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":23,"skipped":353,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:31.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:32:32.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:32:34.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:32:36.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755152, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:32:39.754: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:40.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5934" for this suite. STEP: Destroying namespace "webhook-5934-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.195 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":24,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:40.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:32:40.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3" in namespace "downward-api-3492" to be "success or failure" Jul 19 11:32:40.763: INFO: Pod "downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.872666ms Jul 19 11:32:42.820: INFO: Pod "downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063093358s Jul 19 11:32:44.999: INFO: Pod "downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243020499s Jul 19 11:32:47.023: INFO: Pod "downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.266988161s STEP: Saw pod success Jul 19 11:32:47.023: INFO: Pod "downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3" satisfied condition "success or failure" Jul 19 11:32:47.026: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3 container client-container: STEP: delete the pod Jul 19 11:32:47.169: INFO: Waiting for pod downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3 to disappear Jul 19 11:32:47.226: INFO: Pod downwardapi-volume-51ef28bc-4407-4037-a409-def45a7808a3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:47.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3492" for this suite. • [SLOW TEST:6.577 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:47.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:32:48.136: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:32:50.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, 
loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:32:52.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755168, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:32:55.389: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:32:56.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2283" for this suite. STEP: Destroying namespace "webhook-2283-markers" for this suite. 
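The listing spec above works because the suite labels its webhook configurations, so they can be enumerated and then removed as a set with a single collection delete. Roughly, with a hypothetical selector and the client setup from the first sketch:

    func listAndDeleteWebhooks(ctx context.Context, cs *kubernetes.Clientset, selector string) error {
        whs := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
        list, err := whs.List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return err
        }
        for _, wh := range list.Items {
            fmt.Println("found:", wh.Name)
        }
        // One call removes every configuration matching the selector.
        return whs.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
    }

Once the collection is gone, the previously rejected configMap create succeeds, which is exactly what the spec's final step checks.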
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.224 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":26,"skipped":419,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:32:56.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:32:57.875: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:33:00.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:33:02.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, 
loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:33:04.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755177, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:33:07.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:33:07.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8525" for this suite. STEP: Destroying namespace "webhook-8525-markers" for this suite. 
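"Fail closed" above is failurePolicy: Fail: if the API server cannot reach the webhook backend at all, it rejects every matched request rather than waving it through. A sketch of such a configuration, all names illustrative and the backing service deliberately nonexistent (add admissionv1 "k8s.io/api/admissionregistration/v1" to the imports):

    func failClosedWebhook(nsSelector *metav1.LabelSelector) *admissionv1.ValidatingWebhookConfiguration {
        fail := admissionv1.Fail
        none := admissionv1.SideEffectClassNone
        path := "/unreachable" // nothing serves this; with Fail, matched requests are rejected
        return &admissionv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "fail-closed.example.com"},
            Webhooks: []admissionv1.ValidatingWebhook{{
                Name:                    "fail-closed.example.com",
                FailurePolicy:           &fail,
                SideEffects:             &none,
                AdmissionReviewVersions: []string{"v1"},
                NamespaceSelector:       nsSelector, // scope the blast radius to the test namespace
                Rules: []admissionv1.RuleWithOperations{{
                    Operations: []admissionv1.OperationType{admissionv1.Create},
                    Rule: admissionv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
                ClientConfig: admissionv1.WebhookClientConfig{
                    Service: &admissionv1.ServiceReference{
                        Namespace: "default",
                        Name:      "no-such-service",
                        Path:      &path,
                    },
                },
            }},
        }
    }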
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.103 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":27,"skipped":420,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:33:07.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 19 11:33:07.746: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:07.777: INFO: Number of nodes with available pods: 0 Jul 19 11:33:07.777: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:08.827: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:08.873: INFO: Number of nodes with available pods: 0 Jul 19 11:33:08.873: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:10.175: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:10.222: INFO: Number of nodes with available pods: 0 Jul 19 11:33:10.222: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:10.893: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:10.897: INFO: Number of nodes with available pods: 0 Jul 19 11:33:10.897: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:11.828: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:11.976: INFO: Number of nodes with available pods: 0 Jul 19 11:33:11.976: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:12.989: INFO: DaemonSet pods can't tolerate node jerma-control-plane 
with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:12.999: INFO: Number of nodes with available pods: 0 Jul 19 11:33:12.999: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:13.791: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:14.043: INFO: Number of nodes with available pods: 0 Jul 19 11:33:14.043: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:15.193: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:15.239: INFO: Number of nodes with available pods: 1 Jul 19 11:33:15.239: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:33:16.241: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:16.680: INFO: Number of nodes with available pods: 2 Jul 19 11:33:16.680: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 19 11:33:17.696: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:17.700: INFO: Number of nodes with available pods: 1 Jul 19 11:33:17.700: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:18.786: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:18.790: INFO: Number of nodes with available pods: 1 Jul 19 11:33:18.790: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:19.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:19.714: INFO: Number of nodes with available pods: 1 Jul 19 11:33:19.714: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:20.705: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:20.708: INFO: Number of nodes with available pods: 1 Jul 19 11:33:20.708: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:21.738: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:21.741: INFO: Number of nodes with available pods: 1 Jul 19 11:33:21.741: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:22.706: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:22.710: INFO: Number of nodes with available pods: 1 Jul 19 11:33:22.710: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:23.704: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Jul 19 11:33:23.707: INFO: Number of nodes with available pods: 1 Jul 19 11:33:23.707: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:24.705: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:24.709: INFO: Number of nodes with available pods: 1 Jul 19 11:33:24.709: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:25.731: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:25.735: INFO: Number of nodes with available pods: 1 Jul 19 11:33:25.735: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:26.704: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:26.707: INFO: Number of nodes with available pods: 1 Jul 19 11:33:26.707: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:27.833: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:27.844: INFO: Number of nodes with available pods: 1 Jul 19 11:33:27.844: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:28.704: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:28.706: INFO: Number of nodes with available pods: 1 Jul 19 11:33:28.706: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:29.705: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:29.708: INFO: Number of nodes with available pods: 1 Jul 19 11:33:29.708: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:30.755: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:30.757: INFO: Number of nodes with available pods: 1 Jul 19 11:33:30.758: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:33:31.704: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:33:31.707: INFO: Number of nodes with available pods: 2 Jul 19 11:33:31.707: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3628, will wait for the garbage collector to delete the pods Jul 19 11:33:31.767: INFO: Deleting DaemonSet.extensions daemon-set took: 5.93951ms Jul 19 11:33:32.167: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.280239ms Jul 19 11:33:47.571: INFO: Number of nodes with available pods: 0 Jul 19 11:33:47.571: INFO: Number of running nodes: 0, number of available pods: 0 Jul 
19 11:33:47.574: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3628/daemonsets","resourceVersion":"2405589"},"items":null} Jul 19 11:33:47.577: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3628/pods","resourceVersion":"2405589"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:33:47.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3628" for this suite. • [SLOW TEST:40.032 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":28,"skipped":421,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:33:47.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0719 11:33:57.685742 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 19 11:33:57.685: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:33:57.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8050" for this suite. 
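Both garbage-collector specs in this run hinge on deleteOptions.propagationPolicy: Orphan strips ownerReferences so the dependents outlive their owner, while Background (the non-orphaning case just above) lets the collector chase down and delete the RC's pods, which is why the spec can simply wait for them to disappear. A sketch, with the client setup from the first sketch:

    func deleteRC(ctx context.Context, cs *kubernetes.Clientset, ns, name string, orphan bool) error {
        policy := metav1.DeletePropagationBackground // GC deletes the pods the RC owns
        if orphan {
            policy = metav1.DeletePropagationOrphan // ownerReferences are removed; pods survive
        }
        return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &policy})
    }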
• [SLOW TEST:10.096 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":29,"skipped":422,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:33:57.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Jul 19 11:33:57.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jul 19 11:33:58.078: INFO: stderr: "" Jul 19 11:33:58.078: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:33:58.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-169" for this suite. 
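kubectl api-versions, which the spec above shells out to, is a thin wrapper over the discovery endpoint; the same list can be fetched directly (unsorted here, unlike kubectl's output), with the core group showing up as plain "v1". A sketch, again assuming the clientset from the first example:

    func printAPIVersions(cs *kubernetes.Clientset) error {
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            return err
        }
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                fmt.Println(v.GroupVersion) // e.g. "apps/v1", "batch/v1", "v1"
            }
        }
        return nil
    }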
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":30,"skipped":433,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:33:58.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-n9b8 STEP: Creating a pod to test atomic-volume-subpath Jul 19 11:33:58.241: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n9b8" in namespace "subpath-9159" to be "success or failure" Jul 19 11:33:58.259: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.761499ms Jul 19 11:34:00.263: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021947108s Jul 19 11:34:02.312: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 4.071122643s Jul 19 11:34:04.316: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 6.075099007s Jul 19 11:34:06.320: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 8.079074503s Jul 19 11:34:08.324: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 10.083430881s Jul 19 11:34:10.328: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 12.086797644s Jul 19 11:34:12.331: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 14.090202827s Jul 19 11:34:14.335: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 16.094188209s Jul 19 11:34:16.339: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 18.097659007s Jul 19 11:34:18.343: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 20.101609952s Jul 19 11:34:20.360: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Running", Reason="", readiness=true. Elapsed: 22.119188687s Jul 19 11:34:22.365: INFO: Pod "pod-subpath-test-configmap-n9b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.123705336s STEP: Saw pod success Jul 19 11:34:22.365: INFO: Pod "pod-subpath-test-configmap-n9b8" satisfied condition "success or failure" Jul 19 11:34:22.367: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-n9b8 container test-container-subpath-configmap-n9b8: STEP: delete the pod Jul 19 11:34:22.592: INFO: Waiting for pod pod-subpath-test-configmap-n9b8 to disappear Jul 19 11:34:22.646: INFO: Pod pod-subpath-test-configmap-n9b8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-n9b8 Jul 19 11:34:22.646: INFO: Deleting pod "pod-subpath-test-configmap-n9b8" in namespace "subpath-9159" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:34:22.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9159" for this suite. • [SLOW TEST:24.577 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":31,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:34:22.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:34:23.821: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:34:25.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755264, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755264, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755264, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63730755263, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:34:28.885: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:34:29.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8525" for this suite. STEP: Destroying namespace "webhook-8525-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.343 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":32,"skipped":466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:34:30.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 19 11:34:30.042: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 19 11:34:30.054: INFO: Waiting for terminating namespaces to be deleted... 
Jul 19 11:34:30.056: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jul 19 11:34:30.088: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded) Jul 19 11:34:30.088: INFO: Container kube-proxy ready: true, restart count 0 Jul 19 11:34:30.088: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded) Jul 19 11:34:30.088: INFO: Container kindnet-cni ready: true, restart count 0 Jul 19 11:34:30.088: INFO: sample-webhook-deployment-5f65f8c764-jfqgw from webhook-8525 started at 2020-07-19 11:34:24 +0000 UTC (1 container statuses recorded) Jul 19 11:34:30.088: INFO: Container sample-webhook ready: true, restart count 0 Jul 19 11:34:30.088: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jul 19 11:34:30.104: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded) Jul 19 11:34:30.104: INFO: Container kube-proxy ready: true, restart count 0 Jul 19 11:34:30.104: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded) Jul 19 11:34:30.104: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-35c0acdf-a2cb-4270-b86c-af63f8a89e07 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-35c0acdf-a2cb-4270-b86c-af63f8a89e07 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-35c0acdf-a2cb-4270-b86c-af63f8a89e07 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:34:49.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5685" for this suite. 
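
For reference, the three pods above differ only in their (hostPort, hostIP, protocol) tuple, which is exactly what the scheduler checks for conflicts. A minimal sketch of pod1, assuming an arbitrary image and container port (only the label key/value, pod name, hostPort, hostIP and protocol come from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
    spec:
      nodeSelector:
        kubernetes.io/e2e-35c0acdf-a2cb-4270-b86c-af63f8a89e07: "90"  # pins the pod to the labelled node
      containers:
      - name: agnhost                  # illustrative container name
        image: k8s.gcr.io/pause:3.1    # assumed image; any container works
        ports:
        - containerPort: 8080          # illustrative
          hostPort: 54321
          hostIP: 127.0.0.1
          protocol: TCP

pod2 is identical except for hostIP: 127.0.0.2, and pod3 keeps 127.0.0.2 but switches to protocol: UDP. Because no two pods share the full tuple, all three can land on the same node, which is the behaviour the test asserts.
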
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:19.068 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":33,"skipped":507,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:34:49.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 19 11:34:49.272: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 19 11:34:49.297: INFO: Waiting for terminating namespaces to be deleted... Jul 19 11:34:49.325: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jul 19 11:34:49.331: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded) Jul 19 11:34:49.331: INFO: Container kube-proxy ready: true, restart count 0 Jul 19 11:34:49.331: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded) Jul 19 11:34:49.331: INFO: Container kindnet-cni ready: true, restart count 0 Jul 19 11:34:49.331: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jul 19 11:34:49.336: INFO: pod1 from sched-pred-5685 started at 2020-07-19 11:34:34 +0000 UTC (1 container statuses recorded) Jul 19 11:34:49.336: INFO: Container pod1 ready: true, restart count 0 Jul 19 11:34:49.336: INFO: pod2 from sched-pred-5685 started at 2020-07-19 11:34:38 +0000 UTC (1 container statuses recorded) Jul 19 11:34:49.336: INFO: Container pod2 ready: true, restart count 0 Jul 19 11:34:49.336: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded) Jul 19 11:34:49.336: INFO: Container kindnet-cni ready: true, restart count 0 Jul 19 11:34:49.336: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded) Jul 19 11:34:49.336: INFO: Container kube-proxy ready: true, restart count 0 Jul 19 11:34:49.336: INFO: pod3 from sched-pred-5685 started at 2020-07-19 11:34:42 +0000 UTC (1 container statuses recorded) Jul 19 11:34:49.336: INFO: Container pod3 ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0f1b4e27-c998-41b7-aa92-e664e18c6b78 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-0f1b4e27-c998-41b7-aa92-e664e18c6b78 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-0f1b4e27-c998-41b7-aa92-e664e18c6b78 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:34:59.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3836" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.515 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":34,"skipped":508,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:34:59.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Jul 19 11:34:59.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1408' Jul 19 11:34:59.915: INFO: stderr: "" Jul 19 11:34:59.915: INFO: stdout: "pod/pause created\n" Jul 19 11:34:59.915: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 19 11:34:59.915: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1408" to be "running and ready" Jul 19 11:34:59.923: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.589666ms Jul 19 11:35:01.927: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011462308s Jul 19 11:35:03.931: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015465587s Jul 19 11:35:03.931: INFO: Pod "pause" satisfied condition "running and ready" Jul 19 11:35:03.931: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jul 19 11:35:03.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1408' Jul 19 11:35:04.028: INFO: stderr: "" Jul 19 11:35:04.028: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 19 11:35:04.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1408' Jul 19 11:35:04.110: INFO: stderr: "" Jul 19 11:35:04.110: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 19 11:35:04.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1408' Jul 19 11:35:04.208: INFO: stderr: "" Jul 19 11:35:04.208: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 19 11:35:04.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1408' Jul 19 11:35:04.299: INFO: stderr: "" Jul 19 11:35:04.299: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Jul 19 11:35:04.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1408' Jul 19 11:35:04.561: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:35:04.561: INFO: stdout: "pod \"pause\" force deleted\n" Jul 19 11:35:04.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1408' Jul 19 11:35:05.151: INFO: stderr: "No resources found in kubectl-1408 namespace.\n" Jul 19 11:35:05.151: INFO: stdout: "" Jul 19 11:35:05.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1408 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 19 11:35:05.330: INFO: stderr: "" Jul 19 11:35:05.330: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:35:05.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1408" for this suite. 
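
Stripped of the --kubeconfig flag, the label round-trip just executed boils down to three commands:

    kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-1408
    kubectl get pod pause -L testing-label --namespace=kubectl-1408
    kubectl label pods pause testing-label- --namespace=kubectl-1408

The trailing dash on the last command is kubectl's remove-a-label syntax, and -L (--label-columns) is what adds the TESTING-LABEL column seen in the stdout above.
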
• [SLOW TEST:5.746 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":35,"skipped":515,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:35:05.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 19 11:35:13.177: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:35:14.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5996" for this suite. 
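
Adoption and release here are purely label-driven. A sketch of the ReplicaSet involved, assuming a pause image (the name and the 'name' label key are taken from the log):

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: pod-adoption-release
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: pod-adoption-release
      template:
        metadata:
          labels:
            name: pod-adoption-release
        spec:
          containers:
          - name: pod-adoption-release
            image: k8s.gcr.io/pause:3.1   # assumed image

Because the pre-existing pod already carries name=pod-adoption-release, the controller adopts it by setting itself as the pod's ownerReference rather than creating a fresh replica; changing that label to any non-matching value makes the controller release the pod (drop the ownerReference) and spin up a replacement.
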
• [SLOW TEST:8.922 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":36,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:35:14.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1934/configmap-test-eaf10767-926b-40dd-a6ad-23a30d49dc5b STEP: Creating a pod to test consume configMaps Jul 19 11:35:14.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf" in namespace "configmap-1934" to be "success or failure" Jul 19 11:35:14.469: INFO: Pod "pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf": Phase="Pending", Reason="", readiness=false. Elapsed: 44.491279ms Jul 19 11:35:16.487: INFO: Pod "pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062128621s Jul 19 11:35:18.490: INFO: Pod "pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065911619s STEP: Saw pod success Jul 19 11:35:18.490: INFO: Pod "pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf" satisfied condition "success or failure" Jul 19 11:35:18.493: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf container env-test: STEP: delete the pod Jul 19 11:35:18.524: INFO: Waiting for pod pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf to disappear Jul 19 11:35:18.560: INFO: Pod pod-configmaps-72532d05-e6b9-4efe-8e3d-2e47a2dc7faf no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:35:18.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1934" for this suite. 
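
Being "consumable via environment variable" means the pod spec maps a ConfigMap key into env. A minimal sketch, with the ConfigMap name, key and value invented for illustration (the container name env-test comes from the log):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox                 # assumed image
        command: ["sh", "-c", "env"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test
              key: data-1

The pod just prints its environment and exits, so reaching Succeeded plus a CONFIG_DATA_1=value-1 line in the container log is the whole pass condition.
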
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":533,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:35:18.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 19 11:35:19.125: INFO: Waiting up to 5m0s for pod "pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3" in namespace "emptydir-9560" to be "success or failure" Jul 19 11:35:19.165: INFO: Pod "pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3": Phase="Pending", Reason="", readiness=false. Elapsed: 39.463973ms Jul 19 11:35:21.386: INFO: Pod "pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260330411s Jul 19 11:35:23.517: INFO: Pod "pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391661898s Jul 19 11:35:25.522: INFO: Pod "pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.39618536s STEP: Saw pod success Jul 19 11:35:25.522: INFO: Pod "pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3" satisfied condition "success or failure" Jul 19 11:35:25.571: INFO: Trying to get logs from node jerma-worker pod pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3 container test-container: STEP: delete the pod Jul 19 11:35:25.637: INFO: Waiting for pod pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3 to disappear Jul 19 11:35:25.648: INFO: Pod pod-55c94fae-e257-4a42-ba62-8ec5cb9f02b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:35:25.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9560" for this suite. 
• [SLOW TEST:7.088 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":536,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:35:25.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:35:26.236: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:35:28.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:35:30.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755326, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:35:33.393: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:35:33.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8879-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:35:34.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-58" for this suite. STEP: Destroying namespace "webhook-58-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.288 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":39,"skipped":542,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:35:34.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:35:35.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a" in namespace "projected-5546" to be "success or failure" Jul 19 11:35:35.068: INFO: Pod "downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a": Phase="Pending", Reason="", readiness=false. Elapsed: 63.653438ms Jul 19 11:35:37.072: INFO: Pod "downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067573681s Jul 19 11:35:39.077: INFO: Pod "downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071999571s STEP: Saw pod success Jul 19 11:35:39.077: INFO: Pod "downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a" satisfied condition "success or failure" Jul 19 11:35:39.080: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a container client-container: STEP: delete the pod Jul 19 11:35:39.099: INFO: Waiting for pod downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a to disappear Jul 19 11:35:39.152: INFO: Pod downwardapi-volume-bf7aa28b-439e-4a63-8125-7b3fd260471a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:35:39.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5546" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":544,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:35:39.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:36:39.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7011" for this suite. • [SLOW TEST:60.390 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":547,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:36:39.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:36:52.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5074" for this suite. • [SLOW TEST:12.689 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":42,"skipped":553,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:36:52.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:36:52.786: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:36:53.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4225" for this suite. 
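
The status sub-resource being get/updated/patched there is opt-in on the CRD itself. A sketch of such a definition, with group and kind invented (only the subresources stanza matters for this test):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: noxus.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: noxus
        singular: noxu
        kind: Noxu
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
        subresources:
          status: {}                   # enables the .../status endpoint

With status enabled, writes against .../noxus/<name>/status change only .status, and writes against the main endpoint ignore .status, which is the separation the test exercises.
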
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":43,"skipped":560,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:36:54.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-4bc20881-7627-42a8-a3f5-42ce95b8b184 STEP: Creating a pod to test consume configMaps Jul 19 11:36:56.788: INFO: Waiting up to 5m0s for pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6" in namespace "configmap-8711" to be "success or failure" Jul 19 11:36:57.108: INFO: Pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 319.895027ms Jul 19 11:36:59.117: INFO: Pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329118856s Jul 19 11:37:01.148: INFO: Pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360346945s Jul 19 11:37:03.333: INFO: Pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.54564215s Jul 19 11:37:05.365: INFO: Pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6": Phase="Running", Reason="", readiness=true. Elapsed: 8.577247997s Jul 19 11:37:07.371: INFO: Pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.582844872s STEP: Saw pod success Jul 19 11:37:07.371: INFO: Pod "pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6" satisfied condition "success or failure" Jul 19 11:37:07.373: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6 container configmap-volume-test: STEP: delete the pod Jul 19 11:37:07.477: INFO: Waiting for pod pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6 to disappear Jul 19 11:37:07.497: INFO: Pod pod-configmaps-37e8f7e9-5579-48e8-bbe5-d68e66903ba6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:37:07.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8711" for this suite. 
• [SLOW TEST:12.759 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":575,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:37:07.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-kr26 STEP: Creating a pod to test atomic-volume-subpath Jul 19 11:37:08.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kr26" in namespace "subpath-7828" to be "success or failure" Jul 19 11:37:08.730: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Pending", Reason="", readiness=false. Elapsed: 302.319373ms Jul 19 11:37:10.884: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456137246s Jul 19 11:37:13.028: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.600635841s Jul 19 11:37:15.032: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 6.604390709s Jul 19 11:37:17.036: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 8.608451379s Jul 19 11:37:19.093: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 10.665635812s Jul 19 11:37:21.202: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 12.774024527s Jul 19 11:37:23.205: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 14.777675147s Jul 19 11:37:25.209: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 16.781516338s Jul 19 11:37:27.213: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 18.785369668s Jul 19 11:37:29.217: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 20.789446349s Jul 19 11:37:31.221: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 22.792866922s Jul 19 11:37:33.225: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Running", Reason="", readiness=true. Elapsed: 24.797104635s Jul 19 11:37:35.229: INFO: Pod "pod-subpath-test-secret-kr26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.801215147s STEP: Saw pod success Jul 19 11:37:35.229: INFO: Pod "pod-subpath-test-secret-kr26" satisfied condition "success or failure" Jul 19 11:37:35.232: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-kr26 container test-container-subpath-secret-kr26: STEP: delete the pod Jul 19 11:37:35.273: INFO: Waiting for pod pod-subpath-test-secret-kr26 to disappear Jul 19 11:37:35.288: INFO: Pod pod-subpath-test-secret-kr26 no longer exists STEP: Deleting pod pod-subpath-test-secret-kr26 Jul 19 11:37:35.288: INFO: Deleting pod "pod-subpath-test-secret-kr26" in namespace "subpath-7828" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:37:35.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7828" for this suite. • [SLOW TEST:27.708 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":45,"skipped":594,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:37:35.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jul 19 11:37:35.391: INFO: >>> kubeConfig: /root/.kube/config Jul 19 11:37:38.327: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:37:48.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6027" for this suite. 
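
Concretely, "same group and version but different kinds" means a pair of CRDs like the following (group, kinds and schemas invented); the assertion is that both kinds then appear, without clobbering each other, in the aggregated /openapi/v2 document that kubectl explain reads:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.crd.example.com
    spec:
      group: crd.example.com
      scope: Namespaced
      names: {plural: foos, singular: foo, kind: Foo}
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema: {type: object}
    ---
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: bars.crd.example.com
    spec:
      group: crd.example.com
      scope: Namespaced
      names: {plural: bars, singular: bar, kind: Bar}
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema: {type: object}
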
• [SLOW TEST:13.597 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":46,"skipped":595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:37:48.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:37:49.113: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jul 19 11:37:49.124: INFO: Number of nodes with available pods: 0 Jul 19 11:37:49.124: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jul 19 11:37:49.231: INFO: Number of nodes with available pods: 0 Jul 19 11:37:49.231: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:50.235: INFO: Number of nodes with available pods: 0 Jul 19 11:37:50.235: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:51.531: INFO: Number of nodes with available pods: 0 Jul 19 11:37:51.531: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:52.235: INFO: Number of nodes with available pods: 0 Jul 19 11:37:52.235: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:53.235: INFO: Number of nodes with available pods: 0 Jul 19 11:37:53.235: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:54.235: INFO: Number of nodes with available pods: 1 Jul 19 11:37:54.235: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jul 19 11:37:54.278: INFO: Number of nodes with available pods: 1 Jul 19 11:37:54.279: INFO: Number of running nodes: 0, number of available pods: 1 Jul 19 11:37:55.282: INFO: Number of nodes with available pods: 0 Jul 19 11:37:55.282: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jul 19 11:37:55.298: INFO: Number of nodes with available pods: 0 Jul 19 11:37:55.298: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:56.302: INFO: Number of nodes with available pods: 0 Jul 19 11:37:56.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:57.302: INFO: Number of nodes with available pods: 0 Jul 19 11:37:57.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:58.302: INFO: Number of nodes with available pods: 0 Jul 19 11:37:58.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:37:59.302: INFO: Number of nodes with available pods: 0 Jul 19 11:37:59.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:00.316: INFO: Number of nodes with available pods: 0 Jul 19 11:38:00.316: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:01.302: INFO: Number of nodes with available pods: 0 Jul 19 11:38:01.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:02.302: INFO: Number of nodes with available pods: 0 Jul 19 11:38:02.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:03.301: INFO: Number of nodes with available pods: 0 Jul 19 11:38:03.301: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:04.302: INFO: Number of nodes with available pods: 0 Jul 19 11:38:04.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:05.301: INFO: Number of nodes with available pods: 0 Jul 19 11:38:05.301: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:06.301: INFO: Number of nodes with available pods: 0 Jul 19 11:38:06.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:07.302: INFO: Number of nodes with available pods: 0 Jul 19 11:38:07.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:08.302: INFO: Number of nodes with available pods: 0 Jul 19 11:38:08.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:09.628: INFO: Number of nodes with available pods: 0 Jul 19 11:38:09.628: INFO: Node jerma-worker2 is running 
more than one daemon pod Jul 19 11:38:10.302: INFO: Number of nodes with available pods: 0 Jul 19 11:38:10.302: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:38:11.302: INFO: Number of nodes with available pods: 1 Jul 19 11:38:11.303: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5212, will wait for the garbage collector to delete the pods Jul 19 11:38:11.368: INFO: Deleting DaemonSet.extensions daemon-set took: 6.396491ms Jul 19 11:38:12.469: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.100303422s Jul 19 11:38:18.126: INFO: Number of nodes with available pods: 0 Jul 19 11:38:18.126: INFO: Number of running nodes: 0, number of available pods: 0 Jul 19 11:38:18.128: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5212/daemonsets","resourceVersion":"2407288"},"items":null} Jul 19 11:38:18.133: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5212/pods","resourceVersion":"2407288"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:38:18.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5212" for this suite. • [SLOW TEST:29.567 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":47,"skipped":696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:38:18.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 19 11:38:18.966: INFO: Waiting up to 5m0s for pod "downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028" in namespace "downward-api-4172" to be "success or failure" Jul 19 11:38:18.975: INFO: Pod "downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028": Phase="Pending", Reason="", readiness=false. Elapsed: 9.30561ms Jul 19 11:38:20.980: INFO: Pod "downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013952905s Jul 19 11:38:22.983: INFO: Pod "downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017299596s STEP: Saw pod success Jul 19 11:38:22.983: INFO: Pod "downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028" satisfied condition "success or failure" Jul 19 11:38:22.985: INFO: Trying to get logs from node jerma-worker pod downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028 container dapi-container: STEP: delete the pod Jul 19 11:38:23.054: INFO: Waiting for pod downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028 to disappear Jul 19 11:38:23.377: INFO: Pod downward-api-3a0866e6-6fbb-41b8-8e8b-b4bb23c45028 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:38:23.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4172" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:38:23.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:38:27.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2019" for this suite. 
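
The read-only-root-filesystem check above comes down to one securityContext field. A sketch with assumed pod name, image and command:

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-fs
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox                 # assumed image
        command: ["sh", "-c", "echo test > /file; sleep 240"]
        securityContext:
          readOnlyRootFilesystem: true # the write to /file must fail

Only the container's root filesystem is mounted read-only under readOnlyRootFilesystem; volume mounts remain writable, which is why the test writes to / rather than to a volume.
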
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":770,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:38:27.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:38:29.311: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:38:31.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:38:33.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755509, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:38:36.910: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:38:36.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2603-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:38:42.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-545" for this suite. STEP: Destroying namespace "webhook-545-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.904 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":50,"skipped":809,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:38:43.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 19 11:38:45.093: INFO: Waiting up to 5m0s for pod "downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855" in namespace "downward-api-3201" to be "success or failure" Jul 19 11:38:45.297: INFO: Pod "downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855": Phase="Pending", Reason="", readiness=false. Elapsed: 203.81551ms Jul 19 11:38:47.303: INFO: Pod "downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209346583s Jul 19 11:38:49.490: INFO: Pod "downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855": Phase="Pending", Reason="", readiness=false. Elapsed: 4.396643172s Jul 19 11:38:51.494: INFO: Pod "downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.400351945s STEP: Saw pod success Jul 19 11:38:51.494: INFO: Pod "downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855" satisfied condition "success or failure" Jul 19 11:38:51.496: INFO: Trying to get logs from node jerma-worker pod downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855 container dapi-container: STEP: delete the pod Jul 19 11:38:52.313: INFO: Waiting for pod downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855 to disappear Jul 19 11:38:52.348: INFO: Pod downward-api-64986a48-60ef-4550-bb7b-46d16c0b3855 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:38:52.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3201" for this suite. • [SLOW TEST:8.593 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":810,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:38:52.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8482/secret-test-3729cbcc-9baf-4d92-9dbe-fd8ea7752111 STEP: Creating a pod to test consume secrets Jul 19 11:38:53.450: INFO: Waiting up to 5m0s for pod "pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540" in namespace "secrets-8482" to be "success or failure" Jul 19 11:38:53.963: INFO: Pod "pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540": Phase="Pending", Reason="", readiness=false. Elapsed: 513.484168ms Jul 19 11:38:56.186: INFO: Pod "pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736453041s Jul 19 11:38:58.190: INFO: Pod "pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540": Phase="Running", Reason="", readiness=true. Elapsed: 4.739769325s Jul 19 11:39:00.193: INFO: Pod "pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.74347962s STEP: Saw pod success Jul 19 11:39:00.193: INFO: Pod "pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540" satisfied condition "success or failure" Jul 19 11:39:00.196: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540 container env-test: STEP: delete the pod Jul 19 11:39:00.367: INFO: Waiting for pod pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540 to disappear Jul 19 11:39:00.526: INFO: Pod pod-configmaps-c91342e5-0d1b-4d3c-958e-902b574a2540 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:00.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8482" for this suite. • [SLOW TEST:8.723 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":812,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:01.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ceeed070-bdef-4629-9df3-0f7ff74a0e88 STEP: Creating a pod to test consume configMaps Jul 19 11:39:01.779: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9" in namespace "projected-5246" to be "success or failure" Jul 19 11:39:01.806: INFO: Pod "pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.513885ms Jul 19 11:39:03.861: INFO: Pod "pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082502434s Jul 19 11:39:05.916: INFO: Pod "pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.136738965s STEP: Saw pod success Jul 19 11:39:05.916: INFO: Pod "pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9" satisfied condition "success or failure" Jul 19 11:39:05.919: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9 container projected-configmap-volume-test: STEP: delete the pod Jul 19 11:39:05.952: INFO: Waiting for pod pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9 to disappear Jul 19 11:39:05.968: INFO: Pod pod-projected-configmaps-8fe7bb6a-9202-483a-be8a-902b28248fc9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:05.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5246" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":832,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:05.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 19 11:39:06.395: INFO: Waiting up to 5m0s for pod "pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d" in namespace "emptydir-2275" to be "success or failure" Jul 19 11:39:06.592: INFO: Pod "pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 197.278158ms Jul 19 11:39:08.622: INFO: Pod "pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227484178s Jul 19 11:39:10.626: INFO: Pod "pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.230760964s STEP: Saw pod success Jul 19 11:39:10.626: INFO: Pod "pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d" satisfied condition "success or failure" Jul 19 11:39:10.629: INFO: Trying to get logs from node jerma-worker pod pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d container test-container: STEP: delete the pod Jul 19 11:39:10.700: INFO: Waiting for pod pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d to disappear Jul 19 11:39:10.725: INFO: Pod pod-41d1aed0-227f-40af-9d4e-a86746bd7a1d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:10.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2275" for this suite. 
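The emptyDir test above ("volume on tmpfs should have the correct mode") mounts a memory-backed emptyDir and checks the permissions of the mount point. A minimal sketch, with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # prints the mount's mode and confirms the backing filesystem is tmpfs
    command: ["/bin/sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed instead of node disk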
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":838,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:10.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b9ae44d3-d43e-4ffa-8007-c101747863c6 STEP: Creating a pod to test consume configMaps Jul 19 11:39:11.297: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d" in namespace "projected-7389" to be "success or failure" Jul 19 11:39:11.389: INFO: Pod "pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 92.729394ms Jul 19 11:39:13.393: INFO: Pod "pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096052856s Jul 19 11:39:15.496: INFO: Pod "pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199372011s Jul 19 11:39:17.500: INFO: Pod "pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203623379s STEP: Saw pod success Jul 19 11:39:17.500: INFO: Pod "pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d" satisfied condition "success or failure" Jul 19 11:39:17.503: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d container projected-configmap-volume-test: STEP: delete the pod Jul 19 11:39:17.636: INFO: Waiting for pod pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d to disappear Jul 19 11:39:17.651: INFO: Pod pod-projected-configmaps-36f81e09-1a7d-4dbe-a472-47ef567c3e7d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:17.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7389" for this suite. 
• [SLOW TEST:6.980 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":841,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:17.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jul 19 11:39:17.995: INFO: Waiting up to 5m0s for pod "var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af" in namespace "var-expansion-2911" to be "success or failure" Jul 19 11:39:18.187: INFO: Pod "var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af": Phase="Pending", Reason="", readiness=false. Elapsed: 192.126492ms Jul 19 11:39:20.368: INFO: Pod "var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.373166795s Jul 19 11:39:22.771: INFO: Pod "var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.776348354s Jul 19 11:39:24.826: INFO: Pod "var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.830933959s STEP: Saw pod success Jul 19 11:39:24.826: INFO: Pod "var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af" satisfied condition "success or failure" Jul 19 11:39:24.829: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af container dapi-container: STEP: delete the pod Jul 19 11:39:25.222: INFO: Waiting for pod var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af to disappear Jul 19 11:39:25.224: INFO: Pod var-expansion-73ffed03-f093-4a32-ac37-b0461c6945af no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:25.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2911" for this suite. 
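The variable-expansion test above composes a new env var from previously defined ones using the $(VAR) syntax. A minimal sketch; names and values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      # the kubelet expands this to "foo-value;;bar-value" before the container starts
      value: "$(FOO);;$(BAR)"

References only resolve against variables defined earlier in the list; an unresolvable $(VAR) is left in place as a literal.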
• [SLOW TEST:7.519 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":853,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:25.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:39:25.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915" in namespace "projected-3901" to be "success or failure" Jul 19 11:39:25.630: INFO: Pod "downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915": Phase="Pending", Reason="", readiness=false. Elapsed: 35.06758ms Jul 19 11:39:27.634: INFO: Pod "downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038727391s Jul 19 11:39:29.638: INFO: Pod "downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042668622s Jul 19 11:39:31.988: INFO: Pod "downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.39260778s STEP: Saw pod success Jul 19 11:39:31.988: INFO: Pod "downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915" satisfied condition "success or failure" Jul 19 11:39:31.991: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915 container client-container: STEP: delete the pod Jul 19 11:39:32.217: INFO: Waiting for pod downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915 to disappear Jul 19 11:39:32.260: INFO: Pod downwardapi-volume-4785af99-dedf-4826-ab59-c2f1b39ff915 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:32.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3901" for this suite. 
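The projected downwardAPI test above exposes only the pod's own name as a file inside a projected volume. A minimal sketch; the pod name and mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # prints the pod's own name, delivered via the downward API
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name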
• [SLOW TEST:7.034 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":864,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:32.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jul 19 11:39:33.303: INFO: Created pod &Pod{ObjectMeta:{dns-5578 dns-5578 /api/v1/namespaces/dns-5578/pods/dns-5578 551abd7a-cb29-4315-ba9c-0eadc14af428 2408004 0 2020-07-19 11:39:33 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m487m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m487m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m487m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Jul 19 11:39:41.609: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5578 PodName:dns-5578 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 11:39:41.609: INFO: >>> kubeConfig: /root/.kube/config I0719 11:39:41.645395 6 log.go:172] (0xc000b5c790) (0xc00239b360) Create stream I0719 11:39:41.645430 6 log.go:172] (0xc000b5c790) (0xc00239b360) Stream added, broadcasting: 1 I0719 11:39:41.648286 6 log.go:172] (0xc000b5c790) Reply frame received for 1 I0719 11:39:41.648325 6 log.go:172] (0xc000b5c790) (0xc00239b400) Create stream I0719 11:39:41.648340 6 log.go:172] (0xc000b5c790) (0xc00239b400) Stream added, broadcasting: 3 I0719 11:39:41.652512 6 log.go:172] (0xc000b5c790) Reply frame received for 3 I0719 11:39:41.652559 6 log.go:172] (0xc000b5c790) (0xc00239b4a0) Create stream I0719 11:39:41.652582 6 log.go:172] (0xc000b5c790) (0xc00239b4a0) Stream added, broadcasting: 5 I0719 11:39:41.654133 6 log.go:172] (0xc000b5c790) Reply frame received for 5 I0719 11:39:41.751257 6 log.go:172] (0xc000b5c790) Data frame received for 3 I0719 11:39:41.751294 6 log.go:172] (0xc00239b400) (3) Data frame handling I0719 11:39:41.751316 6 log.go:172] (0xc00239b400) (3) Data frame sent I0719 11:39:41.752013 6 log.go:172] (0xc000b5c790) Data frame received for 3 I0719 11:39:41.752051 6 log.go:172] (0xc00239b400) (3) Data frame handling I0719 11:39:41.752125 6 log.go:172] (0xc000b5c790) Data frame received for 5 I0719 11:39:41.752152 6 log.go:172] (0xc00239b4a0) (5) Data frame handling I0719 11:39:41.753780 6 log.go:172] (0xc000b5c790) Data frame received for 1 I0719 11:39:41.753807 6 log.go:172] (0xc00239b360) (1) Data frame handling I0719 11:39:41.753823 6 log.go:172] (0xc00239b360) (1) Data frame sent I0719 11:39:41.754014 6 log.go:172] (0xc000b5c790) (0xc00239b360) Stream removed, broadcasting: 1 I0719 11:39:41.754059 6 log.go:172] (0xc000b5c790) Go away received I0719 11:39:41.754231 6 log.go:172] (0xc000b5c790) (0xc00239b360) Stream removed, broadcasting: 1 I0719 11:39:41.754256 6 log.go:172] (0xc000b5c790) (0xc00239b400) Stream removed, broadcasting: 3 I0719 11:39:41.754270 6 log.go:172] (0xc000b5c790) (0xc00239b4a0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jul 19 11:39:41.754: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5578 PodName:dns-5578 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 11:39:41.754: INFO: >>> kubeConfig: /root/.kube/config I0719 11:39:42.211975 6 log.go:172] (0xc0029d5970) (0xc002968fa0) Create stream I0719 11:39:42.212018 6 log.go:172] (0xc0029d5970) (0xc002968fa0) Stream added, broadcasting: 1 I0719 11:39:42.214017 6 log.go:172] (0xc0029d5970) Reply frame received for 1 I0719 11:39:42.214072 6 log.go:172] (0xc0029d5970) (0xc001f90aa0) Create stream I0719 11:39:42.214088 6 log.go:172] (0xc0029d5970) (0xc001f90aa0) Stream added, broadcasting: 3 I0719 11:39:42.214789 6 log.go:172] (0xc0029d5970) Reply frame received for 3 I0719 11:39:42.214855 6 log.go:172] (0xc0029d5970) (0xc00239b540) Create stream I0719 11:39:42.214867 6 log.go:172] (0xc0029d5970) (0xc00239b540) Stream added, broadcasting: 5 I0719 11:39:42.215524 6 log.go:172] (0xc0029d5970) Reply frame received for 5 I0719 11:39:42.284705 6 log.go:172] (0xc0029d5970) Data frame received for 3 I0719 11:39:42.284830 6 log.go:172] (0xc001f90aa0) (3) Data frame handling I0719 11:39:42.284850 6 log.go:172] (0xc001f90aa0) (3) Data frame sent I0719 11:39:42.285483 6 log.go:172] (0xc0029d5970) Data frame received for 3 I0719 11:39:42.285495 6 log.go:172] (0xc001f90aa0) (3) Data frame handling I0719 11:39:42.285584 6 log.go:172] (0xc0029d5970) Data frame received for 5 I0719 11:39:42.285603 6 log.go:172] (0xc00239b540) (5) Data frame handling I0719 11:39:42.286986 6 log.go:172] (0xc0029d5970) Data frame received for 1 I0719 11:39:42.287004 6 log.go:172] (0xc002968fa0) (1) Data frame handling I0719 11:39:42.287014 6 log.go:172] (0xc002968fa0) (1) Data frame sent I0719 11:39:42.287026 6 log.go:172] (0xc0029d5970) (0xc002968fa0) Stream removed, broadcasting: 1 I0719 11:39:42.287062 6 log.go:172] (0xc0029d5970) Go away received I0719 11:39:42.287213 6 log.go:172] (0xc0029d5970) (0xc002968fa0) Stream removed, broadcasting: 1 I0719 11:39:42.287237 6 log.go:172] (0xc0029d5970) (0xc001f90aa0) Stream removed, broadcasting: 3 I0719 11:39:42.287244 6 log.go:172] (0xc0029d5970) (0xc00239b540) Stream removed, broadcasting: 5 Jul 19 11:39:42.287: INFO: Deleting pod dns-5578... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:42.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5578" for this suite. 
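The pod dumped above reduces to roughly this manifest (reconstructed from the fields visible in the dump; defaulted fields omitted):

apiVersion: v1
kind: Pod
metadata:
  name: dns-5578
spec:
  dnsPolicy: "None"                  # ignore the cluster DNS settings entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1                        # written verbatim into the pod's /etc/resolv.conf
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]

With dnsPolicy: None the kubelet builds the pod's resolv.conf solely from dnsConfig, which is what the two agnhost probes (dns-suffix and dns-server-list) verify above.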
• [SLOW TEST:10.274 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":58,"skipped":869,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:42.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jul 19 11:39:42.933: INFO: Waiting up to 5m0s for pod "var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79" in namespace "var-expansion-3123" to be "success or failure" Jul 19 11:39:42.952: INFO: Pod "var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79": Phase="Pending", Reason="", readiness=false. Elapsed: 19.16517ms Jul 19 11:39:45.204: INFO: Pod "var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270960454s Jul 19 11:39:47.207: INFO: Pod "var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273841036s Jul 19 11:39:49.253: INFO: Pod "var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79": Phase="Running", Reason="", readiness=true. Elapsed: 6.319960191s Jul 19 11:39:51.255: INFO: Pod "var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.322713497s STEP: Saw pod success Jul 19 11:39:51.255: INFO: Pod "var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79" satisfied condition "success or failure" Jul 19 11:39:51.261: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79 container dapi-container: STEP: delete the pod Jul 19 11:39:51.989: INFO: Waiting for pod var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79 to disappear Jul 19 11:39:52.208: INFO: Pod var-expansion-11c4250c-f3f1-4d1e-8ce8-e1228168db79 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:39:52.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3123" for this suite. 
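The test above substitutes an env var into the container's command. A minimal sketch; the variable name and value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-command-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    # $(TEST_VAR) is expanded by the kubelet before the shell ever runs,
    # so the container prints "test-value is test-value"
    command: ["sh", "-c", "echo test-value is $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: test-value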
• [SLOW TEST:9.674 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":878,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:39:52.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 19 11:39:52.528: INFO: Waiting up to 5m0s for pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85" in namespace "emptydir-5394" to be "success or failure" Jul 19 11:39:52.572: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85": Phase="Pending", Reason="", readiness=false. Elapsed: 43.523273ms Jul 19 11:39:54.605: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076723578s Jul 19 11:39:57.024: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495906177s Jul 19 11:39:59.028: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.500059979s Jul 19 11:40:01.032: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.503965136s Jul 19 11:40:03.051: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85": Phase="Running", Reason="", readiness=true. Elapsed: 10.522955981s Jul 19 11:40:05.114: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.5858418s STEP: Saw pod success Jul 19 11:40:05.114: INFO: Pod "pod-29d51629-26b6-496b-8ac0-fa0f83032a85" satisfied condition "success or failure" Jul 19 11:40:05.159: INFO: Trying to get logs from node jerma-worker pod pod-29d51629-26b6-496b-8ac0-fa0f83032a85 container test-container: STEP: delete the pod Jul 19 11:40:05.280: INFO: Waiting for pod pod-29d51629-26b6-496b-8ac0-fa0f83032a85 to disappear Jul 19 11:40:05.290: INFO: Pod pod-29d51629-26b6-496b-8ac0-fa0f83032a85 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:40:05.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5394" for this suite. 
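The emptyDir test above writes a 0644 file as root on the default (node-disk) medium and reads the permissions back. A rough busybox equivalent; the conformance test itself drives this through its own test image rather than a shell:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo content > /test-volume/test-file && chmod 0644 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium: backed by node storage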
• [SLOW TEST:13.081 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":884,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:40:05.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:40:05.412: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:40:06.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8107" for this suite. 
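The CRD test above only creates and deletes CustomResourceDefinition objects through the API. A minimal CRD of the same general shape; the group, names, and schema here are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com             # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

kubectl create -f crd.yaml followed by kubectl delete crd foos.example.com exercises the same create/delete path the test walks.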
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":61,"skipped":896,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:40:06.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:40:06.756: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 19 11:40:08.885: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:40:09.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5183" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":62,"skipped":902,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:40:10.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:40:10.598: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jul 19 11:40:13.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 create -f -' Jul 19 11:40:39.010: INFO: stderr: "" Jul 19 11:40:39.010: INFO: stdout: "e2e-test-crd-publish-openapi-2745-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 19 11:40:39.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 delete e2e-test-crd-publish-openapi-2745-crds test-foo' Jul 19 11:40:39.442: INFO: stderr: "" Jul 19 11:40:39.443: INFO: stdout: "e2e-test-crd-publish-openapi-2745-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jul 19 11:40:39.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 apply -f -' Jul 19 11:40:39.746: INFO: stderr: "" Jul 19 11:40:39.746: INFO: stdout: "e2e-test-crd-publish-openapi-2745-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 19 11:40:39.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 delete e2e-test-crd-publish-openapi-2745-crds test-foo' Jul 19 11:40:39.890: INFO: stderr: "" Jul 19 11:40:39.890: INFO: stdout: "e2e-test-crd-publish-openapi-2745-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jul 19 11:40:39.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 create -f -' Jul 19 11:40:41.410: INFO: rc: 1 Jul 19 11:40:41.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 apply -f -' Jul 19 11:40:44.087: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jul 19 11:40:44.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 create -f -' Jul 19 11:40:44.426: INFO: rc: 1 Jul 19 11:40:44.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4113 apply -f -' Jul 19 11:40:44.692: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jul 19 11:40:44.692: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2745-crds' Jul 19 11:40:44.949: INFO: stderr: "" Jul 19 11:40:44.949: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2745-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jul 19 11:40:44.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2745-crds.metadata' Jul 19 11:40:45.195: INFO: stderr: "" Jul 19 11:40:45.195: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2745-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jul 19 11:40:45.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2745-crds.spec' Jul 19 11:40:45.410: INFO: stderr: "" Jul 19 11:40:45.410: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2745-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jul 19 11:40:45.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2745-crds.spec.bars' Jul 19 11:40:49.350: INFO: stderr: "" Jul 19 11:40:49.350: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2745-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jul 19 11:40:49.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2745-crds.spec.bars2' Jul 19 11:40:51.261: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:40:54.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4113" for this suite. • [SLOW TEST:44.901 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":63,"skipped":909,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:40:54.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:40:55.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60" in namespace "projected-7494" to be "success or failure" Jul 19 11:40:55.618: INFO: Pod "downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60": Phase="Pending", Reason="", readiness=false. 
Elapsed: 171.747884ms Jul 19 11:40:58.017: INFO: Pod "downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.570863051s Jul 19 11:41:00.145: INFO: Pod "downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.699305796s Jul 19 11:41:02.149: INFO: Pod "downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.702779684s STEP: Saw pod success Jul 19 11:41:02.149: INFO: Pod "downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60" satisfied condition "success or failure" Jul 19 11:41:02.151: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60 container client-container: STEP: delete the pod Jul 19 11:41:02.164: INFO: Waiting for pod downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60 to disappear Jul 19 11:41:02.234: INFO: Pod downwardapi-volume-0ea4b40c-162b-4e94-a177-5a5389e69d60 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:41:02.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7494" for this suite. • [SLOW TEST:7.303 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":921,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:41:02.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-28360660-9429-423e-aa28-434e35a37609 STEP: Creating configMap with name cm-test-opt-upd-af91b000-ff46-406a-a1c8-45eaa6fae609 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-28360660-9429-423e-aa28-434e35a37609 STEP: Updating configmap cm-test-opt-upd-af91b000-ff46-406a-a1c8-45eaa6fae609 STEP: Creating configMap with name cm-test-opt-create-b161e496-02e3-4f88-ba2b-08c2f7b68e04 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:42:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7746" for this suite. 
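The optional-ConfigMap volume behavior this test exercises can be reproduced with a pod along these lines. This is a minimal sketch; the pod, container, and ConfigMap names are illustrative, not taken from the test itself:

apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.31
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-test-opt      # may be absent when the pod starts
      optional: true         # a missing ConfigMap does not block the mount; once the
                             # ConfigMap is created or updated, the kubelet projects
                             # and refreshes its keys as files under /etc/cfg

This mirrors the create/delete/update/recreate steps logged above: the pod keeps running while the optional ConfigMaps change, and the volume contents eventually reflect each change.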
• [SLOW TEST:81.036 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":928,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:42:23.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 19 11:42:27.118: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:42:27.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4350" for this suite. 
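The TerminationMessagePolicy behavior checked here (an empty message on success, even with FallbackToLogsOnError) corresponds to a container spec along these lines; pod name, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.31
    command: ["sh", "-c", "exit 0"]                  # succeeds and writes no message file
    terminationMessagePath: /dev/termination-log     # default path, shown for clarity
    terminationMessagePolicy: FallbackToLogsOnError  # the log tail is used only when the
                                                     # container fails AND the message file is empty

Because the container exits 0, the fallback never triggers and the terminated state's message stays empty, which is what the "Expected: &{} to match Container's Termination Message" assertion above verifies.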
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":928,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:42:27.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1626 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jul 19 11:42:27.484: INFO: Found 0 stateful pods, waiting for 3 Jul 19 11:42:37.513: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:42:37.513: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:42:37.513: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 19 11:42:47.488: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:42:47.488: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:42:47.488: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 19 11:42:47.514: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jul 19 11:42:57.702: INFO: Updating stateful set ss2 Jul 19 11:42:57.869: INFO: Waiting for Pod statefulset-1626/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jul 19 11:43:08.676: INFO: Found 2 stateful pods, waiting for 3 Jul 19 11:43:18.681: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:43:18.681: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:43:18.681: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 19 11:43:29.208: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:43:29.208: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 11:43:29.208: INFO: Waiting for pod 
ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jul 19 11:43:29.297: INFO: Updating stateful set ss2 Jul 19 11:43:29.422: INFO: Waiting for Pod statefulset-1626/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 19 11:43:39.447: INFO: Updating stateful set ss2 Jul 19 11:43:39.805: INFO: Waiting for StatefulSet statefulset-1626/ss2 to complete update Jul 19 11:43:39.805: INFO: Waiting for Pod statefulset-1626/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 19 11:43:49.870: INFO: Waiting for StatefulSet statefulset-1626/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 19 11:43:59.812: INFO: Deleting all statefulset in ns statefulset-1626 Jul 19 11:43:59.815: INFO: Scaling statefulset ss2 to 0 Jul 19 11:44:19.832: INFO: Waiting for statefulset status.replicas updated to 0 Jul 19 11:44:19.835: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:44:20.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1626" for this suite. • [SLOW TEST:112.919 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":67,"skipped":939,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:44:20.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 19 11:44:20.719: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Jul 19 11:44:21.262: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 19 11:44:25.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:44:28.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:44:29.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:44:31.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:44:33.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755861, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:44:37.413: INFO: Waited 1.042851578s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:44:43.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5702" for this suite. 
• [SLOW TEST:23.450 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":68,"skipped":942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:44:43.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jul 19 11:44:43.682: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jul 19 11:44:43.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8920' Jul 19 11:44:44.231: INFO: stderr: "" Jul 19 11:44:44.231: INFO: stdout: "service/agnhost-slave created\n" Jul 19 11:44:44.232: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jul 19 11:44:44.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8920' Jul 19 11:44:44.662: INFO: stderr: "" Jul 19 11:44:44.662: INFO: stdout: "service/agnhost-master created\n" Jul 19 11:44:44.662: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jul 19 11:44:44.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8920' Jul 19 11:44:45.174: INFO: stderr: "" Jul 19 11:44:45.174: INFO: stdout: "service/frontend created\n" Jul 19 11:44:45.174: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jul 19 11:44:45.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8920' Jul 19 11:44:47.671: INFO: stderr: "" Jul 19 11:44:47.671: INFO: stdout: "deployment.apps/frontend created\n" Jul 19 11:44:47.671: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jul 19 11:44:47.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8920' Jul 19 11:44:48.299: INFO: stderr: "" Jul 19 11:44:48.299: INFO: stdout: "deployment.apps/agnhost-master created\n" Jul 19 11:44:48.299: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jul 19 11:44:48.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8920' Jul 19 11:44:48.646: INFO: stderr: "" Jul 19 11:44:48.646: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jul 19 11:44:48.646: INFO: Waiting for all frontend pods to be Running. Jul 19 11:44:58.697: INFO: Waiting for frontend to serve content. Jul 19 11:44:58.706: INFO: Trying to add a new entry to the guestbook. Jul 19 11:44:58.715: INFO: Verifying that added entry can be retrieved. Jul 19 11:44:58.720: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Jul 19 11:45:03.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8920' Jul 19 11:45:03.972: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:45:03.973: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jul 19 11:45:03.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8920' Jul 19 11:45:04.193: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:45:04.193: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jul 19 11:45:04.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8920' Jul 19 11:45:04.362: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:45:04.362: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 19 11:45:04.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8920' Jul 19 11:45:04.467: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:45:04.467: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 19 11:45:04.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8920' Jul 19 11:45:04.591: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:45:04.591: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jul 19 11:45:04.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8920' Jul 19 11:45:04.724: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:45:04.724: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:45:04.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8920" for this suite. 
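One detail worth noting from the manifests above: the frontend Service is created without type: LoadBalancer (the line ships commented out), so it is reachable only inside the cluster, which is all the test needs. On a cluster with a cloud provider, the externally reachable variant would simply enable that line, roughly:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # provisions an external load-balanced IP where supported
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend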
• [SLOW TEST:21.213 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":69,"skipped":983,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:45:04.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jul 19 11:45:05.026: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jul 19 11:45:16.689: INFO: >>> kubeConfig: /root/.kube/config Jul 19 11:45:19.771: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:45:30.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6219" for this suite. 
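The multi-version case above is easiest to picture as a single CRD carrying two versions, exactly one of which is the storage version. A sketch in the test's group (the kind, plural, and schemas here are illustrative; the test generates its own randomized CRD names):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true          # exactly one version may set storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false         # served, but objects are stored as the storage version
    schema:
      openAPIV3Schema:
        type: object

Both served versions are what end up published in the OpenAPI document the test inspects.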
• [SLOW TEST:25.677 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":70,"skipped":988,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:45:30.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:45:30.695: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:45:34.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8372" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1007,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:45:34.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 19 11:45:34.978: INFO: Waiting up to 5m0s for pod "pod-b67788e1-f4c7-4902-8864-f20a0af814b8" in namespace "emptydir-9040" to be "success or failure" Jul 19 11:45:34.982: INFO: Pod "pod-b67788e1-f4c7-4902-8864-f20a0af814b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.681431ms Jul 19 11:45:36.986: INFO: Pod "pod-b67788e1-f4c7-4902-8864-f20a0af814b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007395506s Jul 19 11:45:38.990: INFO: Pod "pod-b67788e1-f4c7-4902-8864-f20a0af814b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011315226s STEP: Saw pod success Jul 19 11:45:38.990: INFO: Pod "pod-b67788e1-f4c7-4902-8864-f20a0af814b8" satisfied condition "success or failure" Jul 19 11:45:38.993: INFO: Trying to get logs from node jerma-worker pod pod-b67788e1-f4c7-4902-8864-f20a0af814b8 container test-container: STEP: delete the pod Jul 19 11:45:39.048: INFO: Waiting for pod pod-b67788e1-f4c7-4902-8864-f20a0af814b8 to disappear Jul 19 11:45:39.050: INFO: Pod pod-b67788e1-f4c7-4902-8864-f20a0af814b8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:45:39.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9040" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:45:39.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:46:10.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-108" for this suite. STEP: Destroying namespace "nsdeletetest-4470" for this suite. Jul 19 11:46:10.670: INFO: Namespace nsdeletetest-4470 was already deleted STEP: Destroying namespace "nsdeletetest-4993" for this suite. 
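The invariant being checked is cascade cleanup: a Namespace cannot finish deleting until everything in it is gone, and its pods are removed as part of that. A minimal reproduction (names and image are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: nsdelete-demo
---
apiVersion: v1
kind: Pod
metadata:
  name: sleeper
  namespace: nsdelete-demo   # deleting nsdelete-demo deletes this pod too;
spec:                        # the namespace sits in Terminating until that completes
  containers:
  - name: main
    image: docker.io/library/busybox:1.31
    command: ["sleep", "3600"]

Recreating a namespace with the same name afterwards, as the test does, yields a fresh, empty namespace with a new UID.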
• [SLOW TEST:31.618 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":73,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:46:10.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:46:11.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d" in namespace "downward-api-2995" to be "success or failure" Jul 19 11:46:11.166: INFO: Pod "downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 125.243519ms Jul 19 11:46:13.170: INFO: Pod "downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129525329s Jul 19 11:46:15.174: INFO: Pod "downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133886118s STEP: Saw pod success Jul 19 11:46:15.175: INFO: Pod "downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d" satisfied condition "success or failure" Jul 19 11:46:15.178: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d container client-container: STEP: delete the pod Jul 19 11:46:15.247: INFO: Waiting for pod downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d to disappear Jul 19 11:46:15.259: INFO: Pod downwardapi-volume-8c5e348d-29c0-4ad9-a402-cea531130d3d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:46:15.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2995" for this suite. 
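The "podname only" check maps to a downwardAPI volume with a single item projecting metadata.name into a file. A sketch (the pod name, mount path, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/podname"]   # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name                      # resolved by the kubelet at pod start

The earlier "should set DefaultMode on files" case differs only in also pinning downwardAPI.defaultMode (e.g. 0400) and asserting the resulting file mode.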
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1073,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:46:15.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:46:15.365: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 19 11:46:20.368: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 19 11:46:20.368: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 19 11:46:22.372: INFO: Creating deployment "test-rollover-deployment" Jul 19 11:46:22.387: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 19 11:46:24.473: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 19 11:46:24.731: INFO: Ensure that both replica sets have 1 created replica Jul 19 11:46:24.736: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 19 11:46:24.742: INFO: Updating deployment test-rollover-deployment Jul 19 11:46:24.742: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 19 11:46:26.754: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 19 11:46:26.759: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 19 11:46:26.764: INFO: all replica sets need to contain the pod-template-hash label Jul 19 11:46:26.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755985, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:46:28.771: INFO: all replica sets need to contain the pod-template-hash label Jul 19 11:46:28.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:46:30.772: INFO: all replica sets need to contain the pod-template-hash label Jul 19 11:46:30.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:46:32.773: INFO: all replica sets need to contain the pod-template-hash label Jul 19 11:46:32.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:46:34.770: INFO: all replica sets need to contain the pod-template-hash label Jul 19 11:46:34.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63730755988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:46:36.771: INFO: all replica sets need to contain the pod-template-hash label Jul 19 11:46:36.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:46:38.866: INFO: Jul 19 11:46:38.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730755982, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:46:40.771: INFO: Jul 19 11:46:40.771: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 19 11:46:40.779: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8749 /apis/apps/v1/namespaces/deployment-8749/deployments/test-rollover-deployment fb340b3b-f9c7-4b25-a926-e4b58e81e180 2410802 2 2020-07-19 11:46:22 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 
0xc002400458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-19 11:46:22 +0000 UTC,LastTransitionTime:2020-07-19 11:46:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-07-19 11:46:39 +0000 UTC,LastTransitionTime:2020-07-19 11:46:22 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 19 11:46:40.782: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-8749 /apis/apps/v1/namespaces/deployment-8749/replicasets/test-rollover-deployment-574d6dfbff d3432f7f-5de8-4d81-811e-60de560376d0 2410791 2 2020-07-19 11:46:24 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment fb340b3b-f9c7-4b25-a926-e4b58e81e180 0xc002400cd7 0xc002400cd8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002400db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 19 11:46:40.782: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 19 11:46:40.782: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8749 /apis/apps/v1/namespaces/deployment-8749/replicasets/test-rollover-controller 3404e2fa-59d2-48fc-976d-481ae1d23612 2410800 2 2020-07-19 11:46:15 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment fb340b3b-f9c7-4b25-a926-e4b58e81e180 0xc002400b47 0xc002400b48}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002400c28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 19 11:46:40.782: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-8749 /apis/apps/v1/namespaces/deployment-8749/replicasets/test-rollover-deployment-f6c94f66c 7b72f7b1-7771-46fa-b111-0466fdac75cc 2410737 2 2020-07-19 11:46:22 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment fb340b3b-f9c7-4b25-a926-e4b58e81e180 0xc002400e70 0xc002400e71}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002400f48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 19 11:46:40.785: INFO: Pod "test-rollover-deployment-574d6dfbff-28wt7" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-28wt7 test-rollover-deployment-574d6dfbff- deployment-8749 /api/v1/namespaces/deployment-8749/pods/test-rollover-deployment-574d6dfbff-28wt7 fd29fb31-2f87-46a6-a500-e462216d0bcc 2410756 0 2020-07-19 11:46:24 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff d3432f7f-5de8-4d81-811e-60de560376d0 0xc002401917 0xc002401918}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tbjxb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tbjxb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tbjxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:46:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:46:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:46:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 11:46:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.212,StartTime:2020-07-19 11:46:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 11:46:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://e42476ba5bd4030cab08abcf9ad51a4ebc906a4ecc282b4eb3bd148c8cb512a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:46:40.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8749" for this suite. • [SLOW TEST:25.551 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":75,"skipped":1086,"failed":0} [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:46:40.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Jul 19 11:46:53.001: INFO: 5 pods remaining Jul 19 11:46:53.001: INFO: 5 pods has nil DeletionTimestamp Jul 19 11:46:53.001: INFO: STEP: Gathering metrics W0719 11:46:57.237527 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
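The Deployment and ReplicaSet dumps above encode the rollover strategy under test: RollingUpdate with MaxUnavailable:0, MaxSurge:1, and MinReadySeconds:10, selecting pods by name=rollover-pod. A minimal manifest with the same shape, reconstructed from the dump as a sketch rather than the suite's exact fixture:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # new pod must stay Ready this long before it counts
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never dip below the desired replica count
      maxSurge: 1              # allow one extra pod while rolling over
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF

With these settings the old ReplicaSet is only scaled to zero after the new pod has been Available for minReadySeconds, which matches the ReadyReplicas:1 / UnavailableReplicas:0 progression logged above.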
Jul 19 11:46:57.237: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:46:57.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5539" for this suite. • [SLOW TEST:16.427 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":76,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:46:57.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:46:57.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1619' Jul 19 11:46:58.779: INFO: stderr: "" Jul 19 11:46:58.779: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jul 19 11:46:58.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1619' Jul 19 11:46:59.687: INFO: stderr: "" Jul 19 11:46:59.687: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
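The garbage-collector test above attaches a second ownerReference (simpletest-rc-to-stay) to half of the pods, then deletes simpletest-rc-to-be-deleted and waits while dependents drain. That deletion amounts to a foreground-propagation delete; a sketch of the same call through the REST API, using the test's names via a local kubectl proxy:

kubectl proxy --port=8001 &
curl -X DELETE \
  'http://127.0.0.1:8001/api/v1/namespaces/gc-5539/replicationcontrollers/simpletest-rc-to-be-deleted' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# Pods owned only by the deleted RC are collected; pods that also list
# simpletest-rc-to-stay in metadata.ownerReferences must survive, which is
# exactly what the test asserts.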
Jul 19 11:47:00.726: INFO: Selector matched 1 pods for map[app:agnhost] Jul 19 11:47:00.726: INFO: Found 0 / 1 Jul 19 11:47:01.695: INFO: Selector matched 1 pods for map[app:agnhost] Jul 19 11:47:01.695: INFO: Found 0 / 1 Jul 19 11:47:02.690: INFO: Selector matched 1 pods for map[app:agnhost] Jul 19 11:47:02.690: INFO: Found 1 / 1 Jul 19 11:47:02.690: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 19 11:47:02.713: INFO: Selector matched 1 pods for map[app:agnhost] Jul 19 11:47:02.713: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 19 11:47:02.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-294l7 --namespace=kubectl-1619' Jul 19 11:47:02.858: INFO: stderr: "" Jul 19 11:47:02.858: INFO: stdout: "Name: agnhost-master-294l7\nNamespace: kubectl-1619\nPriority: 0\nNode: jerma-worker2/172.18.0.10\nStart Time: Sun, 19 Jul 2020 11:46:59 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.175\nIPs:\n IP: 10.244.1.175\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://28e21df6dfcbef151545a7d692bd26a42a0ff9ff1bba398d2b6ed7ab4e954c3a\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 19 Jul 2020 11:47:01 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vh8ps (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vh8ps:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-vh8ps\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-1619/agnhost-master-294l7 to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Jul 19 11:47:02.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1619' Jul 19 11:47:02.997: INFO: stderr: "" Jul 19 11:47:02.997: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1619\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-294l7\n" Jul 19 11:47:02.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1619' Jul 19 11:47:03.128: INFO: stderr: "" Jul 19 11:47:03.128: INFO: 
stdout: "Name: agnhost-master\nNamespace: kubectl-1619\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.99.30.143\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.175:6379\nSession Affinity: None\nEvents: \n" Jul 19 11:47:03.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Jul 19 11:47:03.246: INFO: stderr: "" Jul 19 11:47:03.246: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sun, 19 Jul 2020 11:46:54 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 19 Jul 2020 11:42:46 +0000 Fri, 10 Jul 2020 10:25:51 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 19 Jul 2020 11:42:46 +0000 Fri, 10 Jul 2020 10:25:51 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 19 Jul 2020 11:42:46 +0000 Fri, 10 Jul 2020 10:25:51 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 19 Jul 2020 11:42:46 +0000 Fri, 10 Jul 2020 10:26:30 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.3\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 78cb62e1bd20401ebc9a91779e3da282\n System UUID: 5fa8becb-168a-4d58-8252-a288ac7a8260\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.17.5\n Kube-Proxy Version: v1.17.5\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-9rqh9 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 9d\n kube-system coredns-6955765f44-bq97f 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 9d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kindnet-b87md 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 9d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-proxy-svrlv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n local-path-storage local-path-provisioner-58f6947c7-rkzsd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\nAllocated resources:\n (Total limits may 
be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jul 19 11:47:03.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1619' Jul 19 11:47:03.366: INFO: stderr: "" Jul 19 11:47:03.366: INFO: stdout: "Name: kubectl-1619\nLabels: e2e-framework=kubectl\n e2e-run=c184c73d-dc9a-4cfd-b3db-a9a480a9bd38\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:47:03.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1619" for this suite. • [SLOW TEST:6.153 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":77,"skipped":1142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:47:03.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
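The "simple DaemonSet" just created runs one copy of a pod on every schedulable node, matched by a fixed label selector. A minimal sketch with an assumed label and image (the real fixture lives in test/e2e/apps/daemon_set.go):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set          # label name is illustrative
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # image assumed; any always-running server works
EOF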
Jul 19 11:47:04.092: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:04.112: INFO: Number of nodes with available pods: 0 Jul 19 11:47:04.112: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:05.181: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:05.185: INFO: Number of nodes with available pods: 0 Jul 19 11:47:05.185: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:06.115: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:06.166: INFO: Number of nodes with available pods: 0 Jul 19 11:47:06.166: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:07.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:07.418: INFO: Number of nodes with available pods: 0 Jul 19 11:47:07.418: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:08.115: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:08.118: INFO: Number of nodes with available pods: 0 Jul 19 11:47:08.118: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:09.211: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:09.244: INFO: Number of nodes with available pods: 2 Jul 19 11:47:09.244: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
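The repeated "can't tolerate" lines above are expected: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test's pod template has no matching toleration, so only the two workers are counted. If the DaemonSet were meant to cover tainted nodes as well, a toleration could be patched in:

kubectl --namespace=daemonsets-5603 patch daemonset daemon-set --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/tolerations","value":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}]'
# After the patch the controller also creates a daemon pod on jerma-control-plane.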
Jul 19 11:47:09.582: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:09.708: INFO: Number of nodes with available pods: 1 Jul 19 11:47:09.708: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:11.187: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:11.190: INFO: Number of nodes with available pods: 1 Jul 19 11:47:11.190: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:11.720: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:11.723: INFO: Number of nodes with available pods: 1 Jul 19 11:47:11.723: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:12.786: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:12.790: INFO: Number of nodes with available pods: 1 Jul 19 11:47:12.790: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:13.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:13.716: INFO: Number of nodes with available pods: 1 Jul 19 11:47:13.716: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:47:14.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:47:14.715: INFO: Number of nodes with available pods: 2 Jul 19 11:47:14.715: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5603, will wait for the garbage collector to delete the pods Jul 19 11:47:14.779: INFO: Deleting DaemonSet.extensions daemon-set took: 6.606914ms Jul 19 11:47:15.079: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.252549ms Jul 19 11:47:27.582: INFO: Number of nodes with available pods: 0 Jul 19 11:47:27.582: INFO: Number of running nodes: 0, number of available pods: 0 Jul 19 11:47:27.584: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5603/daemonsets","resourceVersion":"2411278"},"items":null} Jul 19 11:47:27.585: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5603/pods","resourceVersion":"2411278"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:47:27.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5603" for this suite. 
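The revive check works because the DaemonSet controller treats a Failed daemon pod as missing and schedules a replacement. A rough manual approximation, deleting a pod rather than forcing its phase to Failed (pod name and label are placeholders from the sketch above):

kubectl --namespace=daemonsets-5603 delete pod <daemon-set-pod-name>   # stand-in for the forced failure
kubectl --namespace=daemonsets-5603 get pods -l app=daemon-set -o wide --watch
# a replacement pod for the affected node should appear within seconds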
• [SLOW TEST:24.199 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":78,"skipped":1195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:47:27.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:47:27.724: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:47:34.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4131" for this suite. 
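The listing test drives the apiextensions client directly; from the command line the same round trip can be sketched with a throwaway definition (group and names are illustrative, not the randomized ones the suite generates):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1
  scope: Namespaced
  names:
    plural: noxus
    kind: Noxu
EOF
kubectl get customresourcedefinitions        # the new definition shows up in the list
kubectl delete crd noxus.mygroup.example.com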
• [SLOW TEST:6.663 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":79,"skipped":1223,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:47:34.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9399.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9399.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 19 11:47:44.479: INFO: DNS probes using dns-9399/dns-test-eb93aa65-3b40-4469-a156-fecfb49ddb93 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:47:44.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9399" for this suite. • [SLOW TEST:10.883 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":80,"skipped":1226,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:47:45.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7016, will wait for the garbage collector to delete the pods Jul 19 11:47:55.980: INFO: Deleting Job.batch foo took: 5.278168ms Jul 19 11:47:56.380: INFO: Terminating Job.batch foo pods took: 400.292066ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:48:37.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7016" for this suite. 
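The wheezy/jessie scripts above reduce to one idea: resolve each expected name over both UDP and TCP and record "OK" only on a non-empty answer. The core probes, runnable in any pod with dig installed:

dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A   # UDP lookup
dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A     # same record over TCP
# the test loops these for up to 600 seconds and writes an OK marker per name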
• [SLOW TEST:52.564 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":81,"skipped":1237,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:48:37.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 19 11:48:37.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8851' Jul 19 11:48:37.937: INFO: stderr: "" Jul 19 11:48:37.938: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Jul 19 11:48:37.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8851' Jul 19 11:48:42.359: INFO: stderr: "" Jul 19 11:48:42.359: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:48:42.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8851" for this suite. 
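On this v1.17 client, kubectl run with --restart=Never and --generator=run-pod/v1 creates a bare Pod rather than a workload controller. Newer kubectl releases dropped the generator flags, so the equivalent today is simply:

kubectl run e2e-test-httpd-pod --restart=Never \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8851
kubectl get pod e2e-test-httpd-pod --namespace=kubectl-8851    # verify a Pod (not a Deployment) exists
kubectl delete pod e2e-test-httpd-pod --namespace=kubectl-8851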
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":82,"skipped":1242,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:48:42.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jul 19 11:48:46.667: INFO: Pod pod-hostip-91be6ca8-366c-4a06-b44c-93dcc2a1e60f has hostIP: 172.18.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:48:46.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5678" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:48:46.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 19 11:48:46.807: INFO: Waiting up to 5m0s for pod "downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b" in namespace "downward-api-3751" to be "success or failure" Jul 19 11:48:46.813: INFO: Pod "downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29782ms Jul 19 11:48:49.033: INFO: Pod "downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225542461s Jul 19 11:48:51.036: INFO: Pod "downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.228985577s Jul 19 11:48:53.040: INFO: Pod "downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.232915604s STEP: Saw pod success Jul 19 11:48:53.040: INFO: Pod "downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b" satisfied condition "success or failure" Jul 19 11:48:53.043: INFO: Trying to get logs from node jerma-worker2 pod downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b container dapi-container: STEP: delete the pod Jul 19 11:48:53.077: INFO: Waiting for pod downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b to disappear Jul 19 11:48:53.089: INFO: Pod downward-api-aaa6b911-92d2-44a4-948c-9e24aae00e9b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:48:53.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3751" for this suite. • [SLOW TEST:6.422 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1297,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:48:53.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-ebcd16aa-251d-45fa-8a88-ae01dbf236ed STEP: Creating a pod to test consume configMaps Jul 19 11:48:53.187: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac" in namespace "configmap-2960" to be "success or failure" Jul 19 11:48:53.191: INFO: Pod "pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.329327ms Jul 19 11:48:55.218: INFO: Pod "pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030150982s Jul 19 11:48:57.422: INFO: Pod "pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.234091329s STEP: Saw pod success Jul 19 11:48:57.422: INFO: Pod "pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac" satisfied condition "success or failure" Jul 19 11:48:57.425: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac container configmap-volume-test: STEP: delete the pod Jul 19 11:48:57.638: INFO: Waiting for pod pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac to disappear Jul 19 11:48:57.652: INFO: Pod pod-configmaps-0bec559e-69bf-4063-b42b-a4d68a016eac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:48:57.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2960" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:48:57.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:48:58.012: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4662 I0719 11:48:58.035744 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4662, replica count: 1 I0719 11:48:59.086154 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:49:00.086429 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:49:01.086645 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:49:02.086894 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:49:03.087091 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:49:04.087358 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:49:05.087572 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 19 11:49:05.590: INFO: Created: latency-svc-rrk8z Jul 19 11:49:05.596: INFO: Got endpoints: latency-svc-rrk8z [409.07523ms] Jul 19 11:49:05.733: INFO: Created: latency-svc-2z2z2 Jul 19 11:49:05.750: INFO: Got endpoints: latency-svc-2z2z2 
[153.292368ms] Jul 19 11:49:05.781: INFO: Created: latency-svc-f6jvg Jul 19 11:49:05.798: INFO: Got endpoints: latency-svc-f6jvg [201.235303ms] Jul 19 11:49:05.817: INFO: Created: latency-svc-8xjmr Jul 19 11:49:05.877: INFO: Got endpoints: latency-svc-8xjmr [280.059655ms] Jul 19 11:49:05.923: INFO: Created: latency-svc-w4ts4 Jul 19 11:49:05.936: INFO: Got endpoints: latency-svc-w4ts4 [339.976136ms] Jul 19 11:49:06.040: INFO: Created: latency-svc-jnwnl Jul 19 11:49:06.044: INFO: Got endpoints: latency-svc-jnwnl [447.647777ms] Jul 19 11:49:06.434: INFO: Created: latency-svc-mlzb4 Jul 19 11:49:06.496: INFO: Got endpoints: latency-svc-mlzb4 [899.002274ms] Jul 19 11:49:06.498: INFO: Created: latency-svc-dg7db Jul 19 11:49:06.531: INFO: Got endpoints: latency-svc-dg7db [934.723107ms] Jul 19 11:49:06.533: INFO: Created: latency-svc-r75k6 Jul 19 11:49:06.614: INFO: Got endpoints: latency-svc-r75k6 [1.017540822s] Jul 19 11:49:06.691: INFO: Created: latency-svc-x8hjh Jul 19 11:49:06.877: INFO: Got endpoints: latency-svc-x8hjh [1.280103571s] Jul 19 11:49:06.879: INFO: Created: latency-svc-8v6tr Jul 19 11:49:06.940: INFO: Got endpoints: latency-svc-8v6tr [1.343702574s] Jul 19 11:49:06.977: INFO: Created: latency-svc-bs4sd Jul 19 11:49:07.008: INFO: Got endpoints: latency-svc-bs4sd [1.411621125s] Jul 19 11:49:07.014: INFO: Created: latency-svc-rg58w Jul 19 11:49:07.029: INFO: Got endpoints: latency-svc-rg58w [1.432175256s] Jul 19 11:49:07.069: INFO: Created: latency-svc-ktpjp Jul 19 11:49:07.101: INFO: Got endpoints: latency-svc-ktpjp [1.504071189s] Jul 19 11:49:07.239: INFO: Created: latency-svc-858sj Jul 19 11:49:07.275: INFO: Got endpoints: latency-svc-858sj [1.678385747s] Jul 19 11:49:07.404: INFO: Created: latency-svc-6xr4d Jul 19 11:49:07.590: INFO: Got endpoints: latency-svc-6xr4d [1.992846781s] Jul 19 11:49:07.822: INFO: Created: latency-svc-sg4x4 Jul 19 11:49:07.825: INFO: Got endpoints: latency-svc-sg4x4 [2.074916421s] Jul 19 11:49:08.003: INFO: Created: latency-svc-4tqzd Jul 19 11:49:08.062: INFO: Got endpoints: latency-svc-4tqzd [2.264350186s] Jul 19 11:49:08.091: INFO: Created: latency-svc-krrhg Jul 19 11:49:08.176: INFO: Got endpoints: latency-svc-krrhg [2.2992176s] Jul 19 11:49:08.202: INFO: Created: latency-svc-bssmh Jul 19 11:49:08.229: INFO: Got endpoints: latency-svc-bssmh [2.29239629s] Jul 19 11:49:08.273: INFO: Created: latency-svc-jlx55 Jul 19 11:49:08.374: INFO: Got endpoints: latency-svc-jlx55 [2.330047319s] Jul 19 11:49:08.407: INFO: Created: latency-svc-6cjlv Jul 19 11:49:08.422: INFO: Got endpoints: latency-svc-6cjlv [1.925825785s] Jul 19 11:49:08.530: INFO: Created: latency-svc-9jsvd Jul 19 11:49:08.534: INFO: Got endpoints: latency-svc-9jsvd [2.002142847s] Jul 19 11:49:08.596: INFO: Created: latency-svc-d4ctm Jul 19 11:49:08.626: INFO: Got endpoints: latency-svc-d4ctm [2.011789658s] Jul 19 11:49:08.727: INFO: Created: latency-svc-9sl2t Jul 19 11:49:08.784: INFO: Got endpoints: latency-svc-9sl2t [1.906944436s] Jul 19 11:49:08.785: INFO: Created: latency-svc-nhrpk Jul 19 11:49:08.813: INFO: Got endpoints: latency-svc-nhrpk [1.872686642s] Jul 19 11:49:08.870: INFO: Created: latency-svc-4689p Jul 19 11:49:08.873: INFO: Got endpoints: latency-svc-4689p [1.864451649s] Jul 19 11:49:08.948: INFO: Created: latency-svc-zs28p Jul 19 11:49:09.008: INFO: Got endpoints: latency-svc-zs28p [1.979504217s] Jul 19 11:49:09.024: INFO: Created: latency-svc-hpl7h Jul 19 11:49:09.041: INFO: Got endpoints: latency-svc-hpl7h [1.940182643s] Jul 19 11:49:09.059: INFO: Created: latency-svc-p6rf5 Jul 
19 11:49:09.071: INFO: Got endpoints: latency-svc-p6rf5 [1.795360007s] Jul 19 11:49:09.090: INFO: Created: latency-svc-tx8bx Jul 19 11:49:09.101: INFO: Got endpoints: latency-svc-tx8bx [1.511232695s] Jul 19 11:49:09.164: INFO: Created: latency-svc-xvcbn Jul 19 11:49:09.168: INFO: Got endpoints: latency-svc-xvcbn [1.342629067s] Jul 19 11:49:09.220: INFO: Created: latency-svc-sfffk Jul 19 11:49:09.233: INFO: Got endpoints: latency-svc-sfffk [1.171200531s] Jul 19 11:49:09.255: INFO: Created: latency-svc-dhf8l Jul 19 11:49:09.296: INFO: Got endpoints: latency-svc-dhf8l [1.119617162s] Jul 19 11:49:09.318: INFO: Created: latency-svc-jmnsr Jul 19 11:49:09.349: INFO: Got endpoints: latency-svc-jmnsr [1.119548652s] Jul 19 11:49:09.387: INFO: Created: latency-svc-7t5nw Jul 19 11:49:09.452: INFO: Got endpoints: latency-svc-7t5nw [1.077409209s] Jul 19 11:49:09.476: INFO: Created: latency-svc-vdkjm Jul 19 11:49:09.506: INFO: Got endpoints: latency-svc-vdkjm [1.084533037s] Jul 19 11:49:09.536: INFO: Created: latency-svc-5nfd8 Jul 19 11:49:09.595: INFO: Got endpoints: latency-svc-5nfd8 [1.061008752s] Jul 19 11:49:09.606: INFO: Created: latency-svc-9j7dl Jul 19 11:49:09.619: INFO: Got endpoints: latency-svc-9j7dl [993.183051ms] Jul 19 11:49:09.636: INFO: Created: latency-svc-wljqd Jul 19 11:49:09.650: INFO: Got endpoints: latency-svc-wljqd [865.618772ms] Jul 19 11:49:09.668: INFO: Created: latency-svc-rbvkz Jul 19 11:49:09.680: INFO: Got endpoints: latency-svc-rbvkz [866.411477ms] Jul 19 11:49:09.739: INFO: Created: latency-svc-v4vtw Jul 19 11:49:09.742: INFO: Got endpoints: latency-svc-v4vtw [869.087192ms] Jul 19 11:49:09.822: INFO: Created: latency-svc-rwr8b Jul 19 11:49:09.882: INFO: Got endpoints: latency-svc-rwr8b [874.013358ms] Jul 19 11:49:09.936: INFO: Created: latency-svc-z27f6 Jul 19 11:49:09.951: INFO: Got endpoints: latency-svc-z27f6 [909.453519ms] Jul 19 11:49:10.057: INFO: Created: latency-svc-dmk9w Jul 19 11:49:10.060: INFO: Got endpoints: latency-svc-dmk9w [989.633072ms] Jul 19 11:49:10.088: INFO: Created: latency-svc-xcn22 Jul 19 11:49:10.101: INFO: Got endpoints: latency-svc-xcn22 [999.616466ms] Jul 19 11:49:10.118: INFO: Created: latency-svc-cw4fm Jul 19 11:49:10.131: INFO: Got endpoints: latency-svc-cw4fm [963.646776ms] Jul 19 11:49:10.154: INFO: Created: latency-svc-24c4s Jul 19 11:49:10.207: INFO: Got endpoints: latency-svc-24c4s [973.824759ms] Jul 19 11:49:10.243: INFO: Created: latency-svc-j75c6 Jul 19 11:49:10.270: INFO: Got endpoints: latency-svc-j75c6 [973.89764ms] Jul 19 11:49:10.298: INFO: Created: latency-svc-n25h5 Jul 19 11:49:10.350: INFO: Got endpoints: latency-svc-n25h5 [1.001089716s] Jul 19 11:49:10.367: INFO: Created: latency-svc-44787 Jul 19 11:49:10.384: INFO: Got endpoints: latency-svc-44787 [932.115809ms] Jul 19 11:49:10.410: INFO: Created: latency-svc-4x8db Jul 19 11:49:10.423: INFO: Got endpoints: latency-svc-4x8db [917.20591ms] Jul 19 11:49:10.795: INFO: Created: latency-svc-867dz Jul 19 11:49:10.882: INFO: Got endpoints: latency-svc-867dz [1.28706746s] Jul 19 11:49:10.978: INFO: Created: latency-svc-np6dw Jul 19 11:49:10.990: INFO: Got endpoints: latency-svc-np6dw [1.370866668s] Jul 19 11:49:11.058: INFO: Created: latency-svc-hl8xd Jul 19 11:49:11.122: INFO: Got endpoints: latency-svc-hl8xd [1.472239197s] Jul 19 11:49:11.140: INFO: Created: latency-svc-ts6p5 Jul 19 11:49:11.153: INFO: Got endpoints: latency-svc-ts6p5 [1.473158103s] Jul 19 11:49:11.185: INFO: Created: latency-svc-cc82n Jul 19 11:49:11.207: INFO: Got endpoints: latency-svc-cc82n [1.465115535s] Jul 
19 11:49:11.260: INFO: Created: latency-svc-dls2g Jul 19 11:49:11.263: INFO: Got endpoints: latency-svc-dls2g [1.380603359s] Jul 19 11:49:11.328: INFO: Created: latency-svc-bdddj Jul 19 11:49:11.345: INFO: Got endpoints: latency-svc-bdddj [1.394570449s] Jul 19 11:49:11.415: INFO: Created: latency-svc-wzmhs Jul 19 11:49:11.418: INFO: Got endpoints: latency-svc-wzmhs [1.357965266s] Jul 19 11:49:11.446: INFO: Created: latency-svc-kpzsr Jul 19 11:49:11.460: INFO: Got endpoints: latency-svc-kpzsr [1.35908752s] Jul 19 11:49:11.476: INFO: Created: latency-svc-zv9cg Jul 19 11:49:11.492: INFO: Got endpoints: latency-svc-zv9cg [1.360438451s] Jul 19 11:49:11.508: INFO: Created: latency-svc-4pwbx Jul 19 11:49:11.565: INFO: Got endpoints: latency-svc-4pwbx [1.357871498s] Jul 19 11:49:11.587: INFO: Created: latency-svc-6mdfp Jul 19 11:49:11.598: INFO: Got endpoints: latency-svc-6mdfp [1.328919599s] Jul 19 11:49:11.620: INFO: Created: latency-svc-txdx5 Jul 19 11:49:11.635: INFO: Got endpoints: latency-svc-txdx5 [1.285324476s] Jul 19 11:49:11.703: INFO: Created: latency-svc-7hlsw Jul 19 11:49:11.707: INFO: Got endpoints: latency-svc-7hlsw [1.322774496s] Jul 19 11:49:11.742: INFO: Created: latency-svc-g66jv Jul 19 11:49:11.755: INFO: Got endpoints: latency-svc-g66jv [1.331953946s] Jul 19 11:49:11.784: INFO: Created: latency-svc-jds7h Jul 19 11:49:11.791: INFO: Got endpoints: latency-svc-jds7h [909.530021ms] Jul 19 11:49:11.871: INFO: Created: latency-svc-x7k96 Jul 19 11:49:11.874: INFO: Got endpoints: latency-svc-x7k96 [884.136587ms] Jul 19 11:49:11.909: INFO: Created: latency-svc-g774w Jul 19 11:49:11.924: INFO: Got endpoints: latency-svc-g774w [802.012939ms] Jul 19 11:49:11.962: INFO: Created: latency-svc-pftmr Jul 19 11:49:12.052: INFO: Got endpoints: latency-svc-pftmr [899.439357ms] Jul 19 11:49:12.091: INFO: Created: latency-svc-9sscv Jul 19 11:49:12.111: INFO: Got endpoints: latency-svc-9sscv [903.663589ms] Jul 19 11:49:12.167: INFO: Created: latency-svc-fzh6x Jul 19 11:49:12.189: INFO: Got endpoints: latency-svc-fzh6x [925.957374ms] Jul 19 11:49:12.212: INFO: Created: latency-svc-6rggz Jul 19 11:49:12.225: INFO: Got endpoints: latency-svc-6rggz [880.037569ms] Jul 19 11:49:12.315: INFO: Created: latency-svc-k2q5d Jul 19 11:49:12.341: INFO: Created: latency-svc-j6tg5 Jul 19 11:49:12.341: INFO: Got endpoints: latency-svc-k2q5d [922.335089ms] Jul 19 11:49:12.378: INFO: Got endpoints: latency-svc-j6tg5 [917.944203ms] Jul 19 11:49:12.674: INFO: Created: latency-svc-jgp8j Jul 19 11:49:12.678: INFO: Got endpoints: latency-svc-jgp8j [1.186410424s] Jul 19 11:49:12.758: INFO: Created: latency-svc-mn99w Jul 19 11:49:12.773: INFO: Got endpoints: latency-svc-mn99w [1.207611078s] Jul 19 11:49:12.841: INFO: Created: latency-svc-dcnc8 Jul 19 11:49:12.985: INFO: Got endpoints: latency-svc-dcnc8 [1.386251754s] Jul 19 11:49:13.027: INFO: Created: latency-svc-8v22p Jul 19 11:49:13.042: INFO: Got endpoints: latency-svc-8v22p [1.406834743s] Jul 19 11:49:13.075: INFO: Created: latency-svc-twlvd Jul 19 11:49:13.140: INFO: Got endpoints: latency-svc-twlvd [1.432867424s] Jul 19 11:49:13.201: INFO: Created: latency-svc-9q9qg Jul 19 11:49:13.271: INFO: Got endpoints: latency-svc-9q9qg [1.515957201s] Jul 19 11:49:13.273: INFO: Created: latency-svc-bwpzh Jul 19 11:49:13.452: INFO: Got endpoints: latency-svc-bwpzh [1.66040021s] Jul 19 11:49:13.512: INFO: Created: latency-svc-xtm9b Jul 19 11:49:13.619: INFO: Got endpoints: latency-svc-xtm9b [1.744409003s] Jul 19 11:49:13.622: INFO: Created: latency-svc-pd2h2 Jul 19 11:49:13.649: 
INFO: Got endpoints: latency-svc-pd2h2 [1.724563088s] Jul 19 11:49:13.818: INFO: Created: latency-svc-2dkvj Jul 19 11:49:13.853: INFO: Got endpoints: latency-svc-2dkvj [1.800268213s] Jul 19 11:49:13.997: INFO: Created: latency-svc-ngrj7 Jul 19 11:49:14.001: INFO: Got endpoints: latency-svc-ngrj7 [1.890225362s] Jul 19 11:49:14.075: INFO: Created: latency-svc-2jv6q Jul 19 11:49:14.159: INFO: Got endpoints: latency-svc-2jv6q [1.96973178s] Jul 19 11:49:14.363: INFO: Created: latency-svc-c7bzv Jul 19 11:49:14.429: INFO: Got endpoints: latency-svc-c7bzv [2.203862032s] Jul 19 11:49:14.529: INFO: Created: latency-svc-9tfhc Jul 19 11:49:14.537: INFO: Got endpoints: latency-svc-9tfhc [2.196470675s] Jul 19 11:49:14.565: INFO: Created: latency-svc-45zdd Jul 19 11:49:14.579: INFO: Got endpoints: latency-svc-45zdd [2.201564196s] Jul 19 11:49:14.607: INFO: Created: latency-svc-9qhpm Jul 19 11:49:14.615: INFO: Got endpoints: latency-svc-9qhpm [1.936875958s] Jul 19 11:49:14.697: INFO: Created: latency-svc-cxhf5 Jul 19 11:49:14.713: INFO: Got endpoints: latency-svc-cxhf5 [1.939864781s] Jul 19 11:49:14.742: INFO: Created: latency-svc-268cm Jul 19 11:49:14.754: INFO: Got endpoints: latency-svc-268cm [1.768707319s] Jul 19 11:49:14.772: INFO: Created: latency-svc-ftrbq Jul 19 11:49:14.788: INFO: Got endpoints: latency-svc-ftrbq [1.745813735s] Jul 19 11:49:14.835: INFO: Created: latency-svc-bfwnt Jul 19 11:49:14.838: INFO: Got endpoints: latency-svc-bfwnt [1.698007103s] Jul 19 11:49:14.880: INFO: Created: latency-svc-2m5ls Jul 19 11:49:14.892: INFO: Got endpoints: latency-svc-2m5ls [1.620937587s] Jul 19 11:49:14.972: INFO: Created: latency-svc-fdfxm Jul 19 11:49:15.030: INFO: Got endpoints: latency-svc-fdfxm [1.578300257s] Jul 19 11:49:15.129: INFO: Created: latency-svc-mphqq Jul 19 11:49:15.131: INFO: Got endpoints: latency-svc-mphqq [1.51232787s] Jul 19 11:49:15.170: INFO: Created: latency-svc-hz5w8 Jul 19 11:49:15.175: INFO: Got endpoints: latency-svc-hz5w8 [1.526227756s] Jul 19 11:49:15.200: INFO: Created: latency-svc-j575h Jul 19 11:49:15.217: INFO: Got endpoints: latency-svc-j575h [1.364623017s] Jul 19 11:49:15.272: INFO: Created: latency-svc-8kvtw Jul 19 11:49:15.277: INFO: Got endpoints: latency-svc-8kvtw [1.276058945s] Jul 19 11:49:15.306: INFO: Created: latency-svc-92wqp Jul 19 11:49:15.325: INFO: Got endpoints: latency-svc-92wqp [1.166033934s] Jul 19 11:49:15.342: INFO: Created: latency-svc-k4qgb Jul 19 11:49:15.370: INFO: Got endpoints: latency-svc-k4qgb [940.267992ms] Jul 19 11:49:15.510: INFO: Created: latency-svc-c8r4t Jul 19 11:49:15.637: INFO: Got endpoints: latency-svc-c8r4t [1.099822637s] Jul 19 11:49:15.706: INFO: Created: latency-svc-dcbqr Jul 19 11:49:15.830: INFO: Got endpoints: latency-svc-dcbqr [1.250512286s] Jul 19 11:49:15.830: INFO: Created: latency-svc-8ms4l Jul 19 11:49:15.848: INFO: Got endpoints: latency-svc-8ms4l [1.232887009s] Jul 19 11:49:15.918: INFO: Created: latency-svc-sds7d Jul 19 11:49:16.027: INFO: Got endpoints: latency-svc-sds7d [1.314559547s] Jul 19 11:49:16.031: INFO: Created: latency-svc-vcxch Jul 19 11:49:16.094: INFO: Got endpoints: latency-svc-vcxch [1.340430278s] Jul 19 11:49:16.218: INFO: Created: latency-svc-87rhc Jul 19 11:49:16.227: INFO: Got endpoints: latency-svc-87rhc [1.438980923s] Jul 19 11:49:16.651: INFO: Created: latency-svc-68sss Jul 19 11:49:16.670: INFO: Got endpoints: latency-svc-68sss [1.832077649s] Jul 19 11:49:16.866: INFO: Created: latency-svc-gpdld Jul 19 11:49:16.880: INFO: Got endpoints: latency-svc-gpdld [1.987239088s] Jul 19 
11:49:17.057: INFO: Created: latency-svc-6ktr5 Jul 19 11:49:17.084: INFO: Got endpoints: latency-svc-6ktr5 [2.053428458s] Jul 19 11:49:17.218: INFO: Created: latency-svc-84knc Jul 19 11:49:17.252: INFO: Got endpoints: latency-svc-84knc [2.120965981s] Jul 19 11:49:17.305: INFO: Created: latency-svc-48dkg Jul 19 11:49:17.392: INFO: Got endpoints: latency-svc-48dkg [2.217237245s] Jul 19 11:49:17.433: INFO: Created: latency-svc-vmmw7 Jul 19 11:49:17.444: INFO: Got endpoints: latency-svc-vmmw7 [2.226790909s] Jul 19 11:49:17.595: INFO: Created: latency-svc-5hbhp Jul 19 11:49:17.599: INFO: Got endpoints: latency-svc-5hbhp [2.321310382s] Jul 19 11:49:17.683: INFO: Created: latency-svc-cdwth Jul 19 11:49:17.739: INFO: Got endpoints: latency-svc-cdwth [2.413734782s] Jul 19 11:49:17.742: INFO: Created: latency-svc-ptfjx Jul 19 11:49:17.756: INFO: Got endpoints: latency-svc-ptfjx [2.386772603s] Jul 19 11:49:17.825: INFO: Created: latency-svc-hxj6d Jul 19 11:49:17.889: INFO: Got endpoints: latency-svc-hxj6d [2.251882467s] Jul 19 11:49:17.893: INFO: Created: latency-svc-k49jz Jul 19 11:49:17.901: INFO: Got endpoints: latency-svc-k49jz [2.07075761s] Jul 19 11:49:17.927: INFO: Created: latency-svc-qcqw2 Jul 19 11:49:17.937: INFO: Got endpoints: latency-svc-qcqw2 [2.088771669s] Jul 19 11:49:17.969: INFO: Created: latency-svc-svxvz Jul 19 11:49:17.979: INFO: Got endpoints: latency-svc-svxvz [1.951871294s] Jul 19 11:49:18.032: INFO: Created: latency-svc-lz7nm Jul 19 11:49:18.050: INFO: Got endpoints: latency-svc-lz7nm [1.955922878s] Jul 19 11:49:18.119: INFO: Created: latency-svc-zlc2d Jul 19 11:49:18.183: INFO: Got endpoints: latency-svc-zlc2d [1.955939188s] Jul 19 11:49:18.206: INFO: Created: latency-svc-9w5ph Jul 19 11:49:18.220: INFO: Got endpoints: latency-svc-9w5ph [1.550378828s] Jul 19 11:49:18.249: INFO: Created: latency-svc-78dt7 Jul 19 11:49:18.262: INFO: Got endpoints: latency-svc-78dt7 [1.382097137s] Jul 19 11:49:18.350: INFO: Created: latency-svc-4dmsx Jul 19 11:49:18.394: INFO: Got endpoints: latency-svc-4dmsx [1.310562565s] Jul 19 11:49:18.395: INFO: Created: latency-svc-k6t8s Jul 19 11:49:18.419: INFO: Got endpoints: latency-svc-k6t8s [1.166413774s] Jul 19 11:49:18.494: INFO: Created: latency-svc-lwjtm Jul 19 11:49:18.497: INFO: Got endpoints: latency-svc-lwjtm [1.105081928s] Jul 19 11:49:18.722: INFO: Created: latency-svc-hc5xk Jul 19 11:49:18.727: INFO: Got endpoints: latency-svc-hc5xk [1.282564484s] Jul 19 11:49:18.944: INFO: Created: latency-svc-rprcq Jul 19 11:49:18.959: INFO: Got endpoints: latency-svc-rprcq [1.359873639s] Jul 19 11:49:18.984: INFO: Created: latency-svc-mwbcs Jul 19 11:49:18.995: INFO: Got endpoints: latency-svc-mwbcs [1.256050403s] Jul 19 11:49:19.019: INFO: Created: latency-svc-8xddv Jul 19 11:49:19.135: INFO: Got endpoints: latency-svc-8xddv [1.37832348s] Jul 19 11:49:19.500: INFO: Created: latency-svc-2qb6h Jul 19 11:49:19.506: INFO: Got endpoints: latency-svc-2qb6h [1.617013963s] Jul 19 11:49:19.865: INFO: Created: latency-svc-76v2w Jul 19 11:49:19.927: INFO: Got endpoints: latency-svc-76v2w [2.026698971s] Jul 19 11:49:20.239: INFO: Created: latency-svc-n49rh Jul 19 11:49:20.242: INFO: Got endpoints: latency-svc-n49rh [2.304869162s] Jul 19 11:49:20.626: INFO: Created: latency-svc-tnmln Jul 19 11:49:20.878: INFO: Got endpoints: latency-svc-tnmln [2.898389541s] Jul 19 11:49:20.953: INFO: Created: latency-svc-88x6n Jul 19 11:49:20.975: INFO: Got endpoints: latency-svc-88x6n [2.92511891s] Jul 19 11:49:21.153: INFO: Created: latency-svc-5mtmc Jul 19 11:49:21.884: INFO: 
Got endpoints: latency-svc-5mtmc [3.701094923s] Jul 19 11:49:22.356: INFO: Created: latency-svc-fghsp Jul 19 11:49:22.494: INFO: Got endpoints: latency-svc-fghsp [4.274179859s] Jul 19 11:49:23.034: INFO: Created: latency-svc-hbblb Jul 19 11:49:23.309: INFO: Got endpoints: latency-svc-hbblb [5.047449688s] Jul 19 11:49:23.903: INFO: Created: latency-svc-ng878 Jul 19 11:49:23.905: INFO: Got endpoints: latency-svc-ng878 [5.51098078s] Jul 19 11:49:24.196: INFO: Created: latency-svc-cq8z8 Jul 19 11:49:24.572: INFO: Got endpoints: latency-svc-cq8z8 [6.153129803s] Jul 19 11:49:24.612: INFO: Created: latency-svc-ptwsh Jul 19 11:49:24.658: INFO: Got endpoints: latency-svc-ptwsh [6.161004767s] Jul 19 11:49:24.981: INFO: Created: latency-svc-lfxcr Jul 19 11:49:25.261: INFO: Got endpoints: latency-svc-lfxcr [6.534412593s] Jul 19 11:49:25.446: INFO: Created: latency-svc-kznf5 Jul 19 11:49:25.506: INFO: Got endpoints: latency-svc-kznf5 [6.547710072s] Jul 19 11:49:25.758: INFO: Created: latency-svc-jzj9v Jul 19 11:49:25.761: INFO: Got endpoints: latency-svc-jzj9v [6.76658582s] Jul 19 11:49:25.854: INFO: Created: latency-svc-4tg98 Jul 19 11:49:25.854: INFO: Got endpoints: latency-svc-4tg98 [6.719213321s] Jul 19 11:49:25.962: INFO: Created: latency-svc-5bxhb Jul 19 11:49:26.014: INFO: Got endpoints: latency-svc-5bxhb [6.507819271s] Jul 19 11:49:26.059: INFO: Created: latency-svc-4tg8m Jul 19 11:49:26.164: INFO: Got endpoints: latency-svc-4tg8m [6.23633692s] Jul 19 11:49:26.331: INFO: Created: latency-svc-r2psv Jul 19 11:49:26.757: INFO: Got endpoints: latency-svc-r2psv [6.515389504s] Jul 19 11:49:26.915: INFO: Created: latency-svc-xdlc6 Jul 19 11:49:27.092: INFO: Got endpoints: latency-svc-xdlc6 [6.214151252s] Jul 19 11:49:27.321: INFO: Created: latency-svc-4z6nv Jul 19 11:49:27.326: INFO: Got endpoints: latency-svc-4z6nv [6.350543358s] Jul 19 11:49:27.414: INFO: Created: latency-svc-6v7jj Jul 19 11:49:27.583: INFO: Got endpoints: latency-svc-6v7jj [5.698938969s] Jul 19 11:49:27.642: INFO: Created: latency-svc-jqn6z Jul 19 11:49:27.668: INFO: Got endpoints: latency-svc-jqn6z [5.17339071s] Jul 19 11:49:28.021: INFO: Created: latency-svc-2xncb Jul 19 11:49:28.217: INFO: Got endpoints: latency-svc-2xncb [4.907277073s] Jul 19 11:49:28.269: INFO: Created: latency-svc-tlqdr Jul 19 11:49:28.303: INFO: Created: latency-svc-lddjf Jul 19 11:49:28.303: INFO: Got endpoints: latency-svc-tlqdr [4.39740617s] Jul 19 11:49:28.386: INFO: Got endpoints: latency-svc-lddjf [3.813595105s] Jul 19 11:49:28.419: INFO: Created: latency-svc-jsfrg Jul 19 11:49:28.431: INFO: Got endpoints: latency-svc-jsfrg [3.772501813s] Jul 19 11:49:28.566: INFO: Created: latency-svc-ph2hd Jul 19 11:49:28.569: INFO: Got endpoints: latency-svc-ph2hd [3.307831694s] Jul 19 11:49:28.630: INFO: Created: latency-svc-66r7q Jul 19 11:49:28.751: INFO: Got endpoints: latency-svc-66r7q [3.244681829s] Jul 19 11:49:28.764: INFO: Created: latency-svc-pjsml Jul 19 11:49:28.772: INFO: Got endpoints: latency-svc-pjsml [3.010905567s] Jul 19 11:49:28.795: INFO: Created: latency-svc-v845p Jul 19 11:49:28.803: INFO: Got endpoints: latency-svc-v845p [2.949081255s] Jul 19 11:49:28.833: INFO: Created: latency-svc-88msc Jul 19 11:49:28.836: INFO: Got endpoints: latency-svc-88msc [2.821981481s] Jul 19 11:49:28.901: INFO: Created: latency-svc-tqbcl Jul 19 11:49:28.904: INFO: Got endpoints: latency-svc-tqbcl [2.739937185s] Jul 19 11:49:28.956: INFO: Created: latency-svc-fnmv5 Jul 19 11:49:28.985: INFO: Got endpoints: latency-svc-fnmv5 [2.227431223s] Jul 19 11:49:29.038: INFO: 
Created: latency-svc-mqqmn Jul 19 11:49:29.063: INFO: Got endpoints: latency-svc-mqqmn [1.970628286s] Jul 19 11:49:29.110: INFO: Created: latency-svc-pbsmf Jul 19 11:49:29.176: INFO: Got endpoints: latency-svc-pbsmf [1.849814477s] Jul 19 11:49:29.210: INFO: Created: latency-svc-gmhzg Jul 19 11:49:29.231: INFO: Got endpoints: latency-svc-gmhzg [1.647843113s] Jul 19 11:49:29.656: INFO: Created: latency-svc-zt4vs Jul 19 11:49:29.660: INFO: Got endpoints: latency-svc-zt4vs [1.991722553s] Jul 19 11:49:29.715: INFO: Created: latency-svc-dxj52 Jul 19 11:49:29.729: INFO: Got endpoints: latency-svc-dxj52 [1.512377563s] Jul 19 11:49:29.751: INFO: Created: latency-svc-t5cfx Jul 19 11:49:29.839: INFO: Got endpoints: latency-svc-t5cfx [1.536462026s] Jul 19 11:49:29.843: INFO: Created: latency-svc-cnxdf Jul 19 11:49:29.848: INFO: Got endpoints: latency-svc-cnxdf [1.462662612s] Jul 19 11:49:29.875: INFO: Created: latency-svc-9zd2r Jul 19 11:49:29.891: INFO: Got endpoints: latency-svc-9zd2r [1.459896693s] Jul 19 11:49:29.913: INFO: Created: latency-svc-rt4zm Jul 19 11:49:30.015: INFO: Got endpoints: latency-svc-rt4zm [1.445860633s] Jul 19 11:49:30.024: INFO: Created: latency-svc-4zzz2 Jul 19 11:49:30.054: INFO: Got endpoints: latency-svc-4zzz2 [1.303160572s] Jul 19 11:49:30.097: INFO: Created: latency-svc-59vg4 Jul 19 11:49:30.114: INFO: Got endpoints: latency-svc-59vg4 [1.341443114s] Jul 19 11:49:30.177: INFO: Created: latency-svc-zbtqb Jul 19 11:49:30.186: INFO: Got endpoints: latency-svc-zbtqb [1.382679121s] Jul 19 11:49:30.225: INFO: Created: latency-svc-nqgfd Jul 19 11:49:30.252: INFO: Got endpoints: latency-svc-nqgfd [1.416089329s] Jul 19 11:49:30.326: INFO: Created: latency-svc-8kfrv Jul 19 11:49:30.337: INFO: Got endpoints: latency-svc-8kfrv [1.433338746s] Jul 19 11:49:30.423: INFO: Created: latency-svc-jnx5b Jul 19 11:49:30.512: INFO: Got endpoints: latency-svc-jnx5b [1.526832094s] Jul 19 11:49:30.596: INFO: Created: latency-svc-5zf42 Jul 19 11:49:30.649: INFO: Got endpoints: latency-svc-5zf42 [1.586359043s] Jul 19 11:49:30.669: INFO: Created: latency-svc-b77jh Jul 19 11:49:30.709: INFO: Got endpoints: latency-svc-b77jh [1.533069626s] Jul 19 11:49:30.733: INFO: Created: latency-svc-tdcb9 Jul 19 11:49:30.745: INFO: Got endpoints: latency-svc-tdcb9 [1.513866016s] Jul 19 11:49:30.811: INFO: Created: latency-svc-dgrk2 Jul 19 11:49:30.815: INFO: Got endpoints: latency-svc-dgrk2 [1.155040896s] Jul 19 11:49:30.867: INFO: Created: latency-svc-7hxrn Jul 19 11:49:30.885: INFO: Got endpoints: latency-svc-7hxrn [1.155414531s] Jul 19 11:49:30.961: INFO: Created: latency-svc-ssd7z Jul 19 11:49:30.964: INFO: Got endpoints: latency-svc-ssd7z [1.124472817s] Jul 19 11:49:30.997: INFO: Created: latency-svc-w9kjk Jul 19 11:49:31.010: INFO: Got endpoints: latency-svc-w9kjk [1.161475132s] Jul 19 11:49:31.034: INFO: Created: latency-svc-ml7fj Jul 19 11:49:31.046: INFO: Got endpoints: latency-svc-ml7fj [1.155214539s] Jul 19 11:49:31.098: INFO: Created: latency-svc-crqck Jul 19 11:49:31.113: INFO: Got endpoints: latency-svc-crqck [1.097459353s] Jul 19 11:49:31.160: INFO: Created: latency-svc-rb9q7 Jul 19 11:49:31.191: INFO: Got endpoints: latency-svc-rb9q7 [1.136759037s] Jul 19 11:49:31.260: INFO: Created: latency-svc-p6vwr Jul 19 11:49:31.288: INFO: Got endpoints: latency-svc-p6vwr [1.17375069s] Jul 19 11:49:31.327: INFO: Created: latency-svc-pp8tq Jul 19 11:49:31.341: INFO: Got endpoints: latency-svc-pp8tq [1.155419705s] Jul 19 11:49:31.416: INFO: Created: latency-svc-rdk7m Jul 19 11:49:31.441: INFO: Got endpoints: 
latency-svc-rdk7m [1.188304855s] Jul 19 11:49:31.483: INFO: Created: latency-svc-thpp2 Jul 19 11:49:31.499: INFO: Got endpoints: latency-svc-thpp2 [1.161287486s] Jul 19 11:49:31.559: INFO: Created: latency-svc-6xp6z Jul 19 11:49:31.562: INFO: Got endpoints: latency-svc-6xp6z [1.050693311s] Jul 19 11:49:31.597: INFO: Created: latency-svc-z2l72 Jul 19 11:49:31.613: INFO: Got endpoints: latency-svc-z2l72 [963.842345ms] Jul 19 11:49:31.639: INFO: Created: latency-svc-q5pvv Jul 19 11:49:31.715: INFO: Got endpoints: latency-svc-q5pvv [1.005821921s] Jul 19 11:49:31.717: INFO: Created: latency-svc-tg56h Jul 19 11:49:31.721: INFO: Got endpoints: latency-svc-tg56h [976.484828ms] Jul 19 11:49:31.743: INFO: Created: latency-svc-hwphr Jul 19 11:49:31.758: INFO: Got endpoints: latency-svc-hwphr [943.117989ms] Jul 19 11:49:31.758: INFO: Latencies: [153.292368ms 201.235303ms 280.059655ms 339.976136ms 447.647777ms 802.012939ms 865.618772ms 866.411477ms 869.087192ms 874.013358ms 880.037569ms 884.136587ms 899.002274ms 899.439357ms 903.663589ms 909.453519ms 909.530021ms 917.20591ms 917.944203ms 922.335089ms 925.957374ms 932.115809ms 934.723107ms 940.267992ms 943.117989ms 963.646776ms 963.842345ms 973.824759ms 973.89764ms 976.484828ms 989.633072ms 993.183051ms 999.616466ms 1.001089716s 1.005821921s 1.017540822s 1.050693311s 1.061008752s 1.077409209s 1.084533037s 1.097459353s 1.099822637s 1.105081928s 1.119548652s 1.119617162s 1.124472817s 1.136759037s 1.155040896s 1.155214539s 1.155414531s 1.155419705s 1.161287486s 1.161475132s 1.166033934s 1.166413774s 1.171200531s 1.17375069s 1.186410424s 1.188304855s 1.207611078s 1.232887009s 1.250512286s 1.256050403s 1.276058945s 1.280103571s 1.282564484s 1.285324476s 1.28706746s 1.303160572s 1.310562565s 1.314559547s 1.322774496s 1.328919599s 1.331953946s 1.340430278s 1.341443114s 1.342629067s 1.343702574s 1.357871498s 1.357965266s 1.35908752s 1.359873639s 1.360438451s 1.364623017s 1.370866668s 1.37832348s 1.380603359s 1.382097137s 1.382679121s 1.386251754s 1.394570449s 1.406834743s 1.411621125s 1.416089329s 1.432175256s 1.432867424s 1.433338746s 1.438980923s 1.445860633s 1.459896693s 1.462662612s 1.465115535s 1.472239197s 1.473158103s 1.504071189s 1.511232695s 1.51232787s 1.512377563s 1.513866016s 1.515957201s 1.526227756s 1.526832094s 1.533069626s 1.536462026s 1.550378828s 1.578300257s 1.586359043s 1.617013963s 1.620937587s 1.647843113s 1.66040021s 1.678385747s 1.698007103s 1.724563088s 1.744409003s 1.745813735s 1.768707319s 1.795360007s 1.800268213s 1.832077649s 1.849814477s 1.864451649s 1.872686642s 1.890225362s 1.906944436s 1.925825785s 1.936875958s 1.939864781s 1.940182643s 1.951871294s 1.955922878s 1.955939188s 1.96973178s 1.970628286s 1.979504217s 1.987239088s 1.991722553s 1.992846781s 2.002142847s 2.011789658s 2.026698971s 2.053428458s 2.07075761s 2.074916421s 2.088771669s 2.120965981s 2.196470675s 2.201564196s 2.203862032s 2.217237245s 2.226790909s 2.227431223s 2.251882467s 2.264350186s 2.29239629s 2.2992176s 2.304869162s 2.321310382s 2.330047319s 2.386772603s 2.413734782s 2.739937185s 2.821981481s 2.898389541s 2.92511891s 2.949081255s 3.010905567s 3.244681829s 3.307831694s 3.701094923s 3.772501813s 3.813595105s 4.274179859s 4.39740617s 4.907277073s 5.047449688s 5.17339071s 5.51098078s 5.698938969s 6.153129803s 6.161004767s 6.214151252s 6.23633692s 6.350543358s 6.507819271s 6.515389504s 6.534412593s 6.547710072s 6.719213321s 6.76658582s] Jul 19 11:49:31.758: INFO: 50 %ile: 1.462662612s Jul 19 11:49:31.758: INFO: 90 %ile: 3.772501813s Jul 19 11:49:31.758: INFO: 
99 %ile: 6.719213321s Jul 19 11:49:31.758: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:49:31.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4662" for this suite. • [SLOW TEST:34.062 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":86,"skipped":1368,"failed":0} SSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:49:31.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Jul 19 11:49:31.870: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-94" to be "success or failure" Jul 19 11:49:31.877: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.081746ms Jul 19 11:49:33.973: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102504133s Jul 19 11:49:35.976: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106098218s Jul 19 11:49:38.033: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163061122s Jul 19 11:49:40.125: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.254261739s Jul 19 11:49:42.147: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.276299521s STEP: Saw pod success Jul 19 11:49:42.147: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jul 19 11:49:42.149: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jul 19 11:49:42.336: INFO: Waiting for pod pod-host-path-test to disappear Jul 19 11:49:42.342: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:49:42.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-94" for this suite. 
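Note on the Service endpoints latency spec above: it finishes by sorting its 200 samples and printing the 50/90/99 %ile values. A minimal Go sketch of that summary step, using nearest-rank indexing (an assumption for illustration; the framework's exact rounding may differ):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the p-th percentile by nearest rank from a sorted slice
// (an assumption; the e2e framework may compute the index differently).
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Four of the 200 samples reported above, parsed from their printed form.
	var samples []time.Duration
	for _, s := range []string{"6.76658582s", "153.292368ms", "1.462662612s", "3.772501813s"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		samples = append(samples, d)
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}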
• [SLOW TEST:10.584 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1376,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:49:42.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:49:42.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2604" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":88,"skipped":1386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:49:42.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jul 19 11:49:43.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-936' Jul 19 11:49:44.326: INFO: stderr: "" Jul 19 11:49:44.327: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 19 11:49:44.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:49:44.607: INFO: stderr: "" Jul 19 11:49:44.607: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jul 19 11:49:49.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:49:49.809: INFO: stderr: "" Jul 19 11:49:49.809: INFO: stdout: "update-demo-nautilus-7c5mw update-demo-nautilus-sbzs8 " Jul 19 11:49:49.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7c5mw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:49:49.931: INFO: stderr: "" Jul 19 11:49:49.931: INFO: stdout: "" Jul 19 11:49:49.931: INFO: update-demo-nautilus-7c5mw is created but not running Jul 19 11:49:54.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:49:55.077: INFO: stderr: "" Jul 19 11:49:55.077: INFO: stdout: "update-demo-nautilus-7c5mw update-demo-nautilus-sbzs8 " Jul 19 11:49:55.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7c5mw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:49:55.222: INFO: stderr: "" Jul 19 11:49:55.222: INFO: stdout: "true" Jul 19 11:49:55.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7c5mw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:49:55.361: INFO: stderr: "" Jul 19 11:49:55.361: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 19 11:49:55.361: INFO: validating pod update-demo-nautilus-7c5mw Jul 19 11:49:55.373: INFO: got data: { "image": "nautilus.jpg" } Jul 19 11:49:55.373: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 19 11:49:55.373: INFO: update-demo-nautilus-7c5mw is verified up and running Jul 19 11:49:55.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbzs8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:49:55.498: INFO: stderr: "" Jul 19 11:49:55.498: INFO: stdout: "true" Jul 19 11:49:55.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbzs8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:49:55.589: INFO: stderr: "" Jul 19 11:49:55.589: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 19 11:49:55.589: INFO: validating pod update-demo-nautilus-sbzs8 Jul 19 11:49:55.623: INFO: got data: { "image": "nautilus.jpg" } Jul 19 11:49:55.623: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 19 11:49:55.623: INFO: update-demo-nautilus-sbzs8 is verified up and running STEP: scaling down the replication controller Jul 19 11:49:55.624: INFO: scanned /root for discovery docs: Jul 19 11:49:55.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-936' Jul 19 11:49:57.080: INFO: stderr: "" Jul 19 11:49:57.080: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 19 11:49:57.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:49:57.654: INFO: stderr: "" Jul 19 11:49:57.654: INFO: stdout: "update-demo-nautilus-7c5mw update-demo-nautilus-sbzs8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 19 11:50:02.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:50:02.779: INFO: stderr: "" Jul 19 11:50:02.779: INFO: stdout: "update-demo-nautilus-7c5mw update-demo-nautilus-sbzs8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 19 11:50:07.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:50:07.885: INFO: stderr: "" Jul 19 11:50:07.885: INFO: stdout: "update-demo-nautilus-sbzs8 " Jul 19 11:50:07.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbzs8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:50:07.967: INFO: stderr: "" Jul 19 11:50:07.967: INFO: stdout: "true" Jul 19 11:50:07.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbzs8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:50:08.062: INFO: stderr: "" Jul 19 11:50:08.062: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 19 11:50:08.062: INFO: validating pod update-demo-nautilus-sbzs8 Jul 19 11:50:08.083: INFO: got data: { "image": "nautilus.jpg" } Jul 19 11:50:08.083: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 19 11:50:08.083: INFO: update-demo-nautilus-sbzs8 is verified up and running STEP: scaling up the replication controller Jul 19 11:50:08.085: INFO: scanned /root for discovery docs: Jul 19 11:50:08.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-936' Jul 19 11:50:09.527: INFO: stderr: "" Jul 19 11:50:09.527: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 19 11:50:09.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:50:09.772: INFO: stderr: "" Jul 19 11:50:09.772: INFO: stdout: "update-demo-nautilus-mz29q update-demo-nautilus-sbzs8 " Jul 19 11:50:09.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz29q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:50:09.997: INFO: stderr: "" Jul 19 11:50:09.997: INFO: stdout: "" Jul 19 11:50:09.997: INFO: update-demo-nautilus-mz29q is created but not running Jul 19 11:50:14.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-936' Jul 19 11:50:15.104: INFO: stderr: "" Jul 19 11:50:15.105: INFO: stdout: "update-demo-nautilus-mz29q update-demo-nautilus-sbzs8 " Jul 19 11:50:15.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz29q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:50:15.193: INFO: stderr: "" Jul 19 11:50:15.193: INFO: stdout: "true" Jul 19 11:50:15.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz29q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:50:15.287: INFO: stderr: "" Jul 19 11:50:15.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 19 11:50:15.287: INFO: validating pod update-demo-nautilus-mz29q Jul 19 11:50:15.291: INFO: got data: { "image": "nautilus.jpg" } Jul 19 11:50:15.291: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 19 11:50:15.291: INFO: update-demo-nautilus-mz29q is verified up and running Jul 19 11:50:15.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbzs8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:50:15.374: INFO: stderr: "" Jul 19 11:50:15.374: INFO: stdout: "true" Jul 19 11:50:15.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbzs8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-936' Jul 19 11:50:15.468: INFO: stderr: "" Jul 19 11:50:15.468: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 19 11:50:15.468: INFO: validating pod update-demo-nautilus-sbzs8 Jul 19 11:50:15.471: INFO: got data: { "image": "nautilus.jpg" } Jul 19 11:50:15.471: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 19 11:50:15.471: INFO: update-demo-nautilus-sbzs8 is verified up and running STEP: using delete to clean up resources Jul 19 11:50:15.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-936' Jul 19 11:50:15.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 11:50:15.607: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 19 11:50:15.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-936' Jul 19 11:50:15.700: INFO: stderr: "No resources found in kubectl-936 namespace.\n" Jul 19 11:50:15.700: INFO: stdout: "" Jul 19 11:50:15.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-936 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 19 11:50:15.800: INFO: stderr: "" Jul 19 11:50:15.800: INFO: stdout: "update-demo-nautilus-mz29q\nupdate-demo-nautilus-sbzs8\n" Jul 19 11:50:16.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-936' Jul 19 11:50:16.394: INFO: stderr: "No resources found in kubectl-936 namespace.\n" Jul 19 11:50:16.394: INFO: stdout: "" Jul 19 11:50:16.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-936 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 19 11:50:16.490: INFO: stderr: "" Jul 19 11:50:16.490: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:50:16.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-936" for this suite. 
• [SLOW TEST:33.643 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":89,"skipped":1413,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:50:16.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b448ee5e-1454-4ca3-ab1b-eb7d0412c6c3 STEP: Creating a pod to test consume configMaps Jul 19 11:50:16.909: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e" in namespace "projected-8645" to be "success or failure" Jul 19 11:50:16.937: INFO: Pod "pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.393991ms Jul 19 11:50:18.941: INFO: Pod "pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032051778s Jul 19 11:50:20.945: INFO: Pod "pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e": Phase="Running", Reason="", readiness=true. Elapsed: 4.036581565s Jul 19 11:50:22.949: INFO: Pod "pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040713838s STEP: Saw pod success Jul 19 11:50:22.949: INFO: Pod "pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e" satisfied condition "success or failure" Jul 19 11:50:22.953: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e container projected-configmap-volume-test: STEP: delete the pod Jul 19 11:50:22.971: INFO: Waiting for pod pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e to disappear Jul 19 11:50:22.975: INFO: Pod pod-projected-configmaps-3afaee73-09cd-45b4-8c68-6fcd9ffb982e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:50:22.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8645" for this suite. 
• [SLOW TEST:6.485 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1420,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:50:22.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 19 11:50:35.345: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 19 11:50:35.363: INFO: Pod pod-with-poststart-http-hook still exists Jul 19 11:50:37.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 19 11:50:37.423: INFO: Pod pod-with-poststart-http-hook still exists Jul 19 11:50:39.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 19 11:50:39.366: INFO: Pod pod-with-poststart-http-hook still exists Jul 19 11:50:41.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 19 11:50:41.366: INFO: Pod pod-with-poststart-http-hook still exists Jul 19 11:50:43.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 19 11:50:43.417: INFO: Pod pod-with-poststart-http-hook still exists Jul 19 11:50:45.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 19 11:50:45.366: INFO: Pod pod-with-poststart-http-hook still exists Jul 19 11:50:47.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 19 11:50:47.366: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:50:47.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3155" for this suite. 
• [SLOW TEST:24.395 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1438,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:50:47.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:50:48.358: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:50:50.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:50:52.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756248, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:50:55.535: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jul 19 11:50:59.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1524 to-be-attached-pod -i -c=container1' Jul 19 11:51:03.956: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:51:03.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1524" for this suite. STEP: Destroying namespace "webhook-1524-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.131 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":92,"skipped":1452,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:51:04.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 19 11:51:04.684: INFO: Waiting up to 5m0s for pod "pod-53d44de7-fb47-45ff-8fdd-8a163858d30b" in namespace "emptydir-527" to be "success or failure" Jul 19 11:51:04.758: INFO: Pod "pod-53d44de7-fb47-45ff-8fdd-8a163858d30b": Phase="Pending", Reason="", readiness=false. Elapsed: 74.048989ms Jul 19 11:51:06.761: INFO: Pod "pod-53d44de7-fb47-45ff-8fdd-8a163858d30b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076754078s Jul 19 11:51:08.914: INFO: Pod "pod-53d44de7-fb47-45ff-8fdd-8a163858d30b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.230103909s Jul 19 11:51:10.917: INFO: Pod "pod-53d44de7-fb47-45ff-8fdd-8a163858d30b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.233399963s STEP: Saw pod success Jul 19 11:51:10.917: INFO: Pod "pod-53d44de7-fb47-45ff-8fdd-8a163858d30b" satisfied condition "success or failure" Jul 19 11:51:10.920: INFO: Trying to get logs from node jerma-worker2 pod pod-53d44de7-fb47-45ff-8fdd-8a163858d30b container test-container: STEP: delete the pod Jul 19 11:51:10.973: INFO: Waiting for pod pod-53d44de7-fb47-45ff-8fdd-8a163858d30b to disappear Jul 19 11:51:11.126: INFO: Pod pod-53d44de7-fb47-45ff-8fdd-8a163858d30b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:51:11.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-527" for this suite. • [SLOW TEST:6.625 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1458,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:51:11.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:51:11.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca" in namespace "downward-api-7609" to be "success or failure" Jul 19 11:51:11.489: INFO: Pod "downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca": Phase="Pending", Reason="", readiness=false. Elapsed: 55.55879ms Jul 19 11:51:13.493: INFO: Pod "downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059579252s Jul 19 11:51:15.496: INFO: Pod "downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06255214s STEP: Saw pod success Jul 19 11:51:15.496: INFO: Pod "downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca" satisfied condition "success or failure" Jul 19 11:51:15.498: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca container client-container: STEP: delete the pod Jul 19 11:51:15.798: INFO: Waiting for pod downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca to disappear Jul 19 11:51:15.890: INFO: Pod downwardapi-volume-e0c1c64e-0c50-42cc-a268-136888867aca no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:51:15.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7609" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1462,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:51:15.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331 Jul 19 11:51:16.086: INFO: Pod name my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331: Found 0 pods out of 1 Jul 19 11:51:21.089: INFO: Pod name my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331: Found 1 pods out of 1 Jul 19 11:51:21.089: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331" are running Jul 19 11:51:21.094: INFO: Pod "my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331-pqp4p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 11:51:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 11:51:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 11:51:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 11:51:16 +0000 UTC Reason: Message:}]) Jul 19 11:51:21.094: INFO: Trying to dial the pod Jul 19 11:51:26.104: INFO: Controller my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331: Got expected result from replica 1 [my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331-pqp4p]: "my-hostname-basic-1d3aca6e-3a07-4445-bc32-2d5b03efd331-pqp4p", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:51:26.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "replication-controller-8807" for this suite. • [SLOW TEST:10.200 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":95,"skipped":1464,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:51:26.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 19 11:51:26.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-486' Jul 19 11:51:26.312: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 19 11:51:26.312: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jul 19 11:51:26.342: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jul 19 11:51:26.371: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jul 19 11:51:26.523: INFO: scanned /root for discovery docs: Jul 19 11:51:26.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-486' Jul 19 11:51:42.597: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 19 11:51:42.597: INFO: stdout: "Created e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b\nScaling up e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jul 19 11:51:42.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-486' Jul 19 11:51:42.709: INFO: stderr: "" Jul 19 11:51:42.709: INFO: stdout: "e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b-258ds e2e-test-httpd-rc-xzntd " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Jul 19 11:51:47.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-486' Jul 19 11:51:47.815: INFO: stderr: "" Jul 19 11:51:47.816: INFO: stdout: "e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b-258ds " Jul 19 11:51:47.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b-258ds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-486' Jul 19 11:51:47.903: INFO: stderr: "" Jul 19 11:51:47.903: INFO: stdout: "true" Jul 19 11:51:47.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b-258ds -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-486' Jul 19 11:51:47.994: INFO: stderr: "" Jul 19 11:51:47.994: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jul 19 11:51:47.994: INFO: e2e-test-httpd-rc-d9341f3d2bbef681552f264e5ac8011b-258ds is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Jul 19 11:51:47.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-486' Jul 19 11:51:48.106: INFO: stderr: "" Jul 19 11:51:48.106: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:51:48.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-486" for this suite.
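[Editor's note] The image checks above are done through kubectl Go templates; the same verification can be written directly against the API. A minimal client-go sketch, assuming the kubeconfig path and label from the log and the context-taking method signatures of recent client-go releases:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the --kubeconfig flag in the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List the pods the rolling update produced and print each container
        // image, the equivalent of the {{.spec.containers}} template query above.
        pods, err := cs.CoreV1().Pods("kubectl-486").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "run=e2e-test-httpd-rc"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, c := range p.Spec.Containers {
                fmt.Printf("%s: %s\n", p.Name, c.Image)
            }
        }
    }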
• [SLOW TEST:22.020 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":96,"skipped":1468,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:51:48.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 19 11:51:56.437: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 19 11:51:56.466: INFO: Pod pod-with-poststart-exec-hook still exists Jul 19 11:51:58.467: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 19 11:51:58.471: INFO: Pod pod-with-poststart-exec-hook still exists Jul 19 11:52:00.467: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 19 11:52:00.471: INFO: Pod pod-with-poststart-exec-hook still exists Jul 19 11:52:02.467: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 19 11:52:02.471: INFO: Pod pod-with-poststart-exec-hook still exists Jul 19 11:52:04.467: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 19 11:52:04.471: INFO: Pod pod-with-poststart-exec-hook still exists Jul 19 11:52:06.467: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 19 11:52:06.470: INFO: Pod pod-with-poststart-exec-hook still exists Jul 19 11:52:08.467: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 19 11:52:08.471: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:52:08.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9588" for this suite. 
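[Editor's note] The pod-with-poststart-exec-hook pod exercised above pairs a container with a PostStart exec handler. A hedged sketch using the v1.17-era core/v1 types (the handler type was later renamed LifecycleHandler); the image and hook command are assumptions, not values from this log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-poststart-exec-hook",
                    Image: "docker.io/library/busybox:1.29", // assumed image
                    Lifecycle: &corev1.Lifecycle{
                        // Runs inside the container right after it starts; if the
                        // handler fails, the container is killed and restarted
                        // according to the pod's restart policy.
                        PostStart: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "echo started"}, // assumed command
                            },
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }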
• [SLOW TEST:20.349 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:52:08.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:52:09.115: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:52:11.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756329, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756329, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756329, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756329, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:52:13.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756329, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756329, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63730756329, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756329, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:52:16.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:52:26.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1329" for this suite. STEP: Destroying namespace "webhook-1329-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.010 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":98,"skipped":1532,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:52:26.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6490 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating 
service externalsvc in namespace services-6490 STEP: creating replication controller externalsvc in namespace services-6490 I0719 11:52:26.668184 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6490, replica count: 2 I0719 11:52:29.718584 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:52:32.718821 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 19 11:52:32.774: INFO: Creating new exec pod Jul 19 11:52:39.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6490 execpodwzr7k -- /bin/sh -x -c nslookup clusterip-service' Jul 19 11:52:39.317: INFO: stderr: "I0719 11:52:39.237195 1755 log.go:172] (0xc000916e70) (0xc0008fc5a0) Create stream\nI0719 11:52:39.237327 1755 log.go:172] (0xc000916e70) (0xc0008fc5a0) Stream added, broadcasting: 1\nI0719 11:52:39.239565 1755 log.go:172] (0xc000916e70) Reply frame received for 1\nI0719 11:52:39.239595 1755 log.go:172] (0xc000916e70) (0xc0006325a0) Create stream\nI0719 11:52:39.239604 1755 log.go:172] (0xc000916e70) (0xc0006325a0) Stream added, broadcasting: 3\nI0719 11:52:39.240530 1755 log.go:172] (0xc000916e70) Reply frame received for 3\nI0719 11:52:39.240577 1755 log.go:172] (0xc000916e70) (0xc0009a83c0) Create stream\nI0719 11:52:39.240590 1755 log.go:172] (0xc000916e70) (0xc0009a83c0) Stream added, broadcasting: 5\nI0719 11:52:39.241560 1755 log.go:172] (0xc000916e70) Reply frame received for 5\nI0719 11:52:39.301364 1755 log.go:172] (0xc000916e70) Data frame received for 5\nI0719 11:52:39.301406 1755 log.go:172] (0xc0009a83c0) (5) Data frame handling\nI0719 11:52:39.301436 1755 log.go:172] (0xc0009a83c0) (5) Data frame sent\n+ nslookup clusterip-service\nI0719 11:52:39.308986 1755 log.go:172] (0xc000916e70) Data frame received for 3\nI0719 11:52:39.309037 1755 log.go:172] (0xc0006325a0) (3) Data frame handling\nI0719 11:52:39.309074 1755 log.go:172] (0xc0006325a0) (3) Data frame sent\nI0719 11:52:39.310052 1755 log.go:172] (0xc000916e70) Data frame received for 3\nI0719 11:52:39.310084 1755 log.go:172] (0xc0006325a0) (3) Data frame handling\nI0719 11:52:39.310111 1755 log.go:172] (0xc0006325a0) (3) Data frame sent\nI0719 11:52:39.310580 1755 log.go:172] (0xc000916e70) Data frame received for 3\nI0719 11:52:39.310688 1755 log.go:172] (0xc0006325a0) (3) Data frame handling\nI0719 11:52:39.310812 1755 log.go:172] (0xc000916e70) Data frame received for 5\nI0719 11:52:39.310830 1755 log.go:172] (0xc0009a83c0) (5) Data frame handling\nI0719 11:52:39.313034 1755 log.go:172] (0xc000916e70) Data frame received for 1\nI0719 11:52:39.313063 1755 log.go:172] (0xc0008fc5a0) (1) Data frame handling\nI0719 11:52:39.313081 1755 log.go:172] (0xc0008fc5a0) (1) Data frame sent\nI0719 11:52:39.313105 1755 log.go:172] (0xc000916e70) (0xc0008fc5a0) Stream removed, broadcasting: 1\nI0719 11:52:39.313138 1755 log.go:172] (0xc000916e70) Go away received\nI0719 11:52:39.313518 1755 log.go:172] (0xc000916e70) (0xc0008fc5a0) Stream removed, broadcasting: 1\nI0719 11:52:39.313545 1755 log.go:172] (0xc000916e70) (0xc0006325a0) Stream removed, broadcasting: 3\nI0719 11:52:39.313561 1755 log.go:172] (0xc000916e70) (0xc0009a83c0) Stream removed, broadcasting: 5\n" Jul 19 11:52:39.317: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6490.svc.cluster.local\tcanonical name = externalsvc.services-6490.svc.cluster.local.\nName:\texternalsvc.services-6490.svc.cluster.local\nAddress: 10.99.16.61\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6490, will wait for the garbage collector to delete the pods Jul 19 11:52:39.376: INFO: Deleting ReplicationController externalsvc took: 5.896175ms Jul 19 11:52:39.876: INFO: Terminating ReplicationController externalsvc pods took: 500.268218ms Jul 19 11:52:47.459: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:52:47.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6490" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.993 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":99,"skipped":1534,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:52:47.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 19 11:52:53.111: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:52:53.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4588" for this suite. 
• [SLOW TEST:5.970 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1545,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:52:53.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 11:52:54.337: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jul 19 11:52:54.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:52:54.363: INFO: Number of nodes with available pods: 0 Jul 19 11:52:54.363: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:52:55.504: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:52:55.542: INFO: Number of nodes with available pods: 0 Jul 19 11:52:55.542: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:52:56.369: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:52:56.372: INFO: Number of nodes with available pods: 0 Jul 19 11:52:56.372: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:52:57.367: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:52:57.371: INFO: Number of nodes with available pods: 0 Jul 19 11:52:57.371: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:52:58.414: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:52:58.418: INFO: Number of nodes with available pods: 0 Jul 19 11:52:58.418: INFO: Node jerma-worker is running more than one daemon pod Jul 19 11:52:59.366: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:52:59.370: INFO: Number of nodes with available pods: 2 Jul 19 11:52:59.370: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 19 11:52:59.621: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:52:59.621: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:52:59.919: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:00.924: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:00.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:00.928: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:01.924: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:01.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 19 11:53:01.927: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:02.924: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:02.924: INFO: Pod daemon-set-29zk4 is not available Jul 19 11:53:02.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:02.928: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:03.923: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:03.923: INFO: Pod daemon-set-29zk4 is not available Jul 19 11:53:03.923: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:03.926: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:04.924: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:04.924: INFO: Pod daemon-set-29zk4 is not available Jul 19 11:53:04.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:04.928: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:05.923: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:05.923: INFO: Pod daemon-set-29zk4 is not available Jul 19 11:53:05.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:05.927: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:06.923: INFO: Wrong image for pod: daemon-set-29zk4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:06.924: INFO: Pod daemon-set-29zk4 is not available Jul 19 11:53:06.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:06.928: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:07.924: INFO: Pod daemon-set-nd9cg is not available Jul 19 11:53:07.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 19 11:53:07.928: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:08.924: INFO: Pod daemon-set-nd9cg is not available Jul 19 11:53:08.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:08.929: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:09.970: INFO: Pod daemon-set-nd9cg is not available Jul 19 11:53:09.970: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:09.973: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:10.924: INFO: Pod daemon-set-nd9cg is not available Jul 19 11:53:10.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:10.929: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:11.924: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:11.928: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:12.923: INFO: Wrong image for pod: daemon-set-rppjq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 19 11:53:12.923: INFO: Pod daemon-set-rppjq is not available Jul 19 11:53:12.927: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:13.925: INFO: Pod daemon-set-95n74 is not available Jul 19 11:53:13.930: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jul 19 11:53:13.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:13.939: INFO: Number of nodes with available pods: 1 Jul 19 11:53:13.939: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:53:14.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:14.948: INFO: Number of nodes with available pods: 1 Jul 19 11:53:14.948: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:53:15.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:15.948: INFO: Number of nodes with available pods: 1 Jul 19 11:53:15.948: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:53:16.943: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:16.947: INFO: Number of nodes with available pods: 1 Jul 19 11:53:16.947: INFO: Node jerma-worker2 is running more than one daemon pod Jul 19 11:53:17.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 11:53:17.947: INFO: Number of nodes with available pods: 2 Jul 19 11:53:17.948: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-230, will wait for the garbage collector to delete the pods Jul 19 11:53:18.021: INFO: Deleting DaemonSet.extensions daemon-set took: 6.496524ms Jul 19 11:53:18.321: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.247296ms Jul 19 11:53:28.510: INFO: Number of nodes with available pods: 0 Jul 19 11:53:28.510: INFO: Number of running nodes: 0, number of available pods: 0 Jul 19 11:53:28.512: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-230/daemonsets","resourceVersion":"2414538"},"items":null} Jul 19 11:53:28.815: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-230/pods","resourceVersion":"2414539"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:53:29.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-230" for this suite. 
• [SLOW TEST:35.747 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":101,"skipped":1551,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:53:29.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-1681feca-f3bd-448e-a52d-bdc5647aecb6 in namespace container-probe-5299 Jul 19 11:53:40.981: INFO: Started pod liveness-1681feca-f3bd-448e-a52d-bdc5647aecb6 in namespace container-probe-5299 STEP: checking the pod's current state and verifying that restartCount is present Jul 19 11:53:40.985: INFO: Initial restart count of pod liveness-1681feca-f3bd-448e-a52d-bdc5647aecb6 is 0 Jul 19 11:54:03.306: INFO: Restart count of pod container-probe-5299/liveness-1681feca-f3bd-448e-a52d-bdc5647aecb6 is now 1 (22.320645179s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:54:03.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5299" for this suite. 
• [SLOW TEST:34.373 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1566,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:54:03.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:54:03.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104" in namespace "projected-1646" to be "success or failure" Jul 19 11:54:04.011: INFO: Pod "downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104": Phase="Pending", Reason="", readiness=false. Elapsed: 54.483372ms Jul 19 11:54:06.015: INFO: Pod "downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058199463s Jul 19 11:54:08.019: INFO: Pod "downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104": Phase="Running", Reason="", readiness=true. Elapsed: 4.061990706s Jul 19 11:54:10.023: INFO: Pod "downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065988917s STEP: Saw pod success Jul 19 11:54:10.023: INFO: Pod "downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104" satisfied condition "success or failure" Jul 19 11:54:10.026: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104 container client-container: STEP: delete the pod Jul 19 11:54:10.072: INFO: Waiting for pod downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104 to disappear Jul 19 11:54:10.076: INFO: Pod downwardapi-volume-5e393132-af8c-44ff-95d7-0eff178dc104 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:54:10.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1646" for this suite. 
• [SLOW TEST:6.509 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:54:10.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3861.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3861.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 19 11:54:16.199: INFO: DNS probes using dns-test-d733d87b-2462-4df2-b8f8-8a03ddafcc8b succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3861.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3861.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 19 11:54:22.317: INFO: File wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:22.320: INFO: File jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:22.320: INFO: Lookups using dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb failed for: [wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local] Jul 19 11:54:27.325: INFO: File wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. 
' instead of 'bar.example.com.' Jul 19 11:54:27.328: INFO: File jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:27.328: INFO: Lookups using dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb failed for: [wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local] Jul 19 11:54:32.325: INFO: File wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:32.329: INFO: File jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:32.329: INFO: Lookups using dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb failed for: [wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local] Jul 19 11:54:37.325: INFO: File wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:37.327: INFO: File jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:37.327: INFO: Lookups using dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb failed for: [wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local] Jul 19 11:54:42.326: INFO: File wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 19 11:54:42.329: INFO: File jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local from pod dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb contains 'foo.example.com. ' instead of 'bar.example.com.' 
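[Editor's note] For context on the CNAME answers above: dns-test-service-3 is an ExternalName service, and the probes are waiting for its spec.externalName to flip from foo.example.com to bar.example.com, which takes a few polls to propagate. A minimal sketch of such a service:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: "dns-3861"},
            Spec: corev1.ServiceSpec{
                // Cluster DNS serves this as a CNAME instead of allocating a ClusterIP.
                Type:         corev1.ServiceTypeExternalName,
                ExternalName: "foo.example.com",
            },
        }
        b, _ := json.MarshalIndent(svc, "", "  ")
        fmt.Println(string(b))
    }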
Jul 19 11:54:42.329: INFO: Lookups using dns-3861/dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb failed for: [wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local] Jul 19 11:54:47.328: INFO: DNS probes using dns-test-7d3171b5-5ff1-42f7-a417-072433234ccb succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3861.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3861.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3861.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3861.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 19 11:54:56.302: INFO: DNS probes using dns-test-aa910330-5544-465d-9c66-b4b0f5362558 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:54:56.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3861" for this suite. • [SLOW TEST:46.760 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":104,"skipped":1615,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:54:56.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 19 11:54:57.718: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 19 11:55:00.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:55:02.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 19 11:55:04.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756497, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 11:55:07.517: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:55:07.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1093" for this suite. STEP: Destroying namespace "webhook-1093-markers" for this suite. 
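[Editor's note] Registering the mutating webhook above amounts to posting a MutatingWebhookConfiguration that routes configmap CREATEs to the e2e-test-webhook service. A rough sketch with the admissionregistration/v1 types; the object name, webhook name, path, and CA bundle are placeholders, not values from this log:

    package main

    import (
        "encoding/json"
        "fmt"

        admv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        path := "/mutating-configmaps" // assumed service path
        sideEffects := admv1.SideEffectClassNone
        failurePolicy := admv1.Fail
        cfg := &admv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"}, // assumed name
            Webhooks: []admv1.MutatingWebhook{{
                Name: "adding-configmap-data.example.com", // assumed name
                ClientConfig: admv1.WebhookClientConfig{
                    Service: &admv1.ServiceReference{
                        Namespace: "webhook-1093",
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                    CABundle: []byte("<PEM bundle>"), // placeholder
                },
                // Only configmap CREATE requests are sent to the webhook.
                Rules: []admv1.RuleWithOperations{{
                    Operations: []admv1.OperationType{admv1.Create},
                    Rule: admv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
                SideEffects:             &sideEffects,
                FailurePolicy:           &failurePolicy,
                AdmissionReviewVersions: []string{"v1"},
            }},
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }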
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.261 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":105,"skipped":1624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:55:09.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-6ab9f8a1-3b91-4c24-9ede-2e50ba899712 STEP: Creating a pod to test consume secrets Jul 19 11:55:09.691: INFO: Waiting up to 5m0s for pod "pod-secrets-07b39a28-e265-44a3-adda-b028900176ec" in namespace "secrets-8341" to be "success or failure" Jul 19 11:55:09.948: INFO: Pod "pod-secrets-07b39a28-e265-44a3-adda-b028900176ec": Phase="Pending", Reason="", readiness=false. Elapsed: 256.8224ms Jul 19 11:55:11.952: INFO: Pod "pod-secrets-07b39a28-e265-44a3-adda-b028900176ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260952999s Jul 19 11:55:13.978: INFO: Pod "pod-secrets-07b39a28-e265-44a3-adda-b028900176ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286599614s Jul 19 11:55:15.982: INFO: Pod "pod-secrets-07b39a28-e265-44a3-adda-b028900176ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.290480976s STEP: Saw pod success Jul 19 11:55:15.982: INFO: Pod "pod-secrets-07b39a28-e265-44a3-adda-b028900176ec" satisfied condition "success or failure" Jul 19 11:55:15.985: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-07b39a28-e265-44a3-adda-b028900176ec container secret-volume-test: STEP: delete the pod Jul 19 11:55:16.142: INFO: Waiting for pod pod-secrets-07b39a28-e265-44a3-adda-b028900176ec to disappear Jul 19 11:55:16.162: INFO: Pod pod-secrets-07b39a28-e265-44a3-adda-b028900176ec no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:55:16.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8341" for this suite. 
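[Editor's note] The secret volume in this test remaps a key to a new path with a per-item mode. A hedged sketch; the secret name matches the log, while the key name, mapped path, mode, and image follow the usual e2e pattern but are assumptions here:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        mode := int32(0400) // per-item mode; overrides the volume's defaultMode
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumed image
                    VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName: "secret-test-map-6ab9f8a1-3b91-4c24-9ede-2e50ba899712",
                            // Map key "data-1" to a different file name with mode 0400.
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",          // assumed key
                                Path: "new-path-data-1", // assumed mapped path
                                Mode: &mode,
                            }},
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }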
• [SLOW TEST:7.253 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:55:16.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 19 11:55:16.601: INFO: Waiting up to 5m0s for pod "downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233" in namespace "downward-api-6456" to be "success or failure" Jul 19 11:55:16.648: INFO: Pod "downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233": Phase="Pending", Reason="", readiness=false. Elapsed: 46.702828ms Jul 19 11:55:18.651: INFO: Pod "downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050046463s Jul 19 11:55:20.678: INFO: Pod "downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077202859s Jul 19 11:55:22.682: INFO: Pod "downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081183737s STEP: Saw pod success Jul 19 11:55:22.682: INFO: Pod "downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233" satisfied condition "success or failure" Jul 19 11:55:22.685: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233 container client-container: STEP: delete the pod Jul 19 11:55:22.733: INFO: Waiting for pod downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233 to disappear Jul 19 11:55:22.737: INFO: Pod downwardapi-volume-211a85fb-d29b-461e-9337-9ffd46d0e233 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:55:22.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6456" for this suite. 
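[Editor's note] Here DefaultMode stamps one file mode on every file in the downward API volume, as opposed to the per-item Mode in the secrets test above. A minimal sketch; the 0400 mode, paths, and image are assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        defaultMode := int32(0400) // assumed mode, applied to each projected file
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-defaultmode"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumed image
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            DefaultMode: &defaultMode,
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }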
• [SLOW TEST:6.385 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1680,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:55:22.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7775 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7775 STEP: creating replication controller externalsvc in namespace services-7775 I0719 11:55:23.465008 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7775, replica count: 2 I0719 11:55:26.515511 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 11:55:29.515778 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jul 19 11:55:29.861: INFO: Creating new exec pod Jul 19 11:55:34.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7775 execpodlq6v6 -- /bin/sh -x -c nslookup nodeport-service' Jul 19 11:55:34.257: INFO: stderr: "I0719 11:55:34.155653 1775 log.go:172] (0xc00097e8f0) (0xc0009f2000) Create stream\nI0719 11:55:34.155711 1775 log.go:172] (0xc00097e8f0) (0xc0009f2000) Stream added, broadcasting: 1\nI0719 11:55:34.158772 1775 log.go:172] (0xc00097e8f0) Reply frame received for 1\nI0719 11:55:34.158827 1775 log.go:172] (0xc00097e8f0) (0xc000a38000) Create stream\nI0719 11:55:34.158841 1775 log.go:172] (0xc00097e8f0) (0xc000a38000) Stream added, broadcasting: 3\nI0719 11:55:34.160178 1775 log.go:172] (0xc00097e8f0) Reply frame received for 3\nI0719 11:55:34.160217 1775 log.go:172] (0xc00097e8f0) (0xc0009f20a0) Create stream\nI0719 11:55:34.160229 1775 log.go:172] (0xc00097e8f0) (0xc0009f20a0) Stream added, broadcasting: 5\nI0719 11:55:34.161538 1775 log.go:172] (0xc00097e8f0) Reply frame received for 5\nI0719 11:55:34.241060 1775 log.go:172] (0xc00097e8f0) Data frame received for 5\nI0719 11:55:34.241092 1775 log.go:172] 
(0xc0009f20a0) (5) Data frame handling\nI0719 11:55:34.241113 1775 log.go:172] (0xc0009f20a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0719 11:55:34.248605 1775 log.go:172] (0xc00097e8f0) Data frame received for 3\nI0719 11:55:34.248654 1775 log.go:172] (0xc000a38000) (3) Data frame handling\nI0719 11:55:34.248694 1775 log.go:172] (0xc000a38000) (3) Data frame sent\nI0719 11:55:34.250049 1775 log.go:172] (0xc00097e8f0) Data frame received for 3\nI0719 11:55:34.250083 1775 log.go:172] (0xc000a38000) (3) Data frame handling\nI0719 11:55:34.250114 1775 log.go:172] (0xc000a38000) (3) Data frame sent\nI0719 11:55:34.250395 1775 log.go:172] (0xc00097e8f0) Data frame received for 3\nI0719 11:55:34.250432 1775 log.go:172] (0xc000a38000) (3) Data frame handling\nI0719 11:55:34.250543 1775 log.go:172] (0xc00097e8f0) Data frame received for 5\nI0719 11:55:34.250584 1775 log.go:172] (0xc0009f20a0) (5) Data frame handling\nI0719 11:55:34.252550 1775 log.go:172] (0xc00097e8f0) Data frame received for 1\nI0719 11:55:34.252587 1775 log.go:172] (0xc0009f2000) (1) Data frame handling\nI0719 11:55:34.252609 1775 log.go:172] (0xc0009f2000) (1) Data frame sent\nI0719 11:55:34.252632 1775 log.go:172] (0xc00097e8f0) (0xc0009f2000) Stream removed, broadcasting: 1\nI0719 11:55:34.252661 1775 log.go:172] (0xc00097e8f0) Go away received\nI0719 11:55:34.253237 1775 log.go:172] (0xc00097e8f0) (0xc0009f2000) Stream removed, broadcasting: 1\nI0719 11:55:34.253267 1775 log.go:172] (0xc00097e8f0) (0xc000a38000) Stream removed, broadcasting: 3\nI0719 11:55:34.253310 1775 log.go:172] (0xc00097e8f0) (0xc0009f20a0) Stream removed, broadcasting: 5\n" Jul 19 11:55:34.258: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7775.svc.cluster.local\tcanonical name = externalsvc.services-7775.svc.cluster.local.\nName:\texternalsvc.services-7775.svc.cluster.local\nAddress: 10.106.49.18\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7775, will wait for the garbage collector to delete the pods Jul 19 11:55:34.318: INFO: Deleting ReplicationController externalsvc took: 6.787121ms Jul 19 11:55:34.719: INFO: Terminating ReplicationController externalsvc pods took: 400.237349ms Jul 19 11:55:47.577: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:55:47.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7775" for this suite. 
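The type change at the heart of this spec is an ordinary service update. A sketch of that mutation with client-go, assuming a context-aware client-go (v0.18 or newer); the namespace, service names, and CNAME target match the log, but the code itself is a reconstruction, not the suite's own. An ExternalName service keeps no cluster IP, ports, or selector; cluster DNS just answers with a CNAME, which is exactly what the nslookup stdout above shows:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    svcs := cs.CoreV1().Services("services-7775")
    svc, err := svcs.Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Flip NodePort -> ExternalName; clearing ClusterIP is permitted only
    // as part of a conversion to ExternalName.
    svc.Spec.Type = corev1.ServiceTypeExternalName
    svc.Spec.ExternalName = "externalsvc.services-7775.svc.cluster.local"
    svc.Spec.ClusterIP = ""
    svc.Spec.Ports = nil
    svc.Spec.Selector = nil

    if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("nodeport-service now resolves as a CNAME to externalsvc")
}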
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:24.857 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":108,"skipped":1709,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:55:47.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 19 11:55:47.693: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 19 11:55:52.735: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:55:52.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7698" for this suite. 
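"Releasing" a pod means the ReplicationController drops its controller ownerReference once the pod's labels stop matching the selector, and then starts a replacement to restore the replica count. A hedged sketch of the label flip that triggers this; the name=pod-release selector is inferred from the log's pod name and should be treated as an assumption:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    pods := cs.CoreV1().Pods("replication-controller-7698")

    list, err := pods.List(context.TODO(), metav1.ListOptions{LabelSelector: "name=pod-release"})
    if err != nil || len(list.Items) == 0 {
        panic(fmt.Sprintf("no matching pod: %v", err))
    }

    pod := &list.Items[0]
    pod.Labels["name"] = "not-pod-release" // no longer matches the RC selector
    if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("pod", pod.Name, "released from its ReplicationController")
}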
• [SLOW TEST:5.249 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":109,"skipped":1715,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:55:52.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jul 19 11:55:52.995: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 11:56:08.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1173" for this suite.
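The multi-version CRD mechanism this spec relies on is the per-version served flag: turning it off removes that version's definitions from the published OpenAPI document while leaving the other version intact. The cluster under test is v1.17, so the suite presumably used apiextensions.k8s.io/v1beta1; the sketch below uses today's v1 types and is purely illustrative (a real v1 CRD also needs a schema per version):

package main

import (
    "fmt"

    apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
    // Two versions of one CRD; flipping Served on v2 is the "mark a version
    // not served" step from the log.
    versions := []apiextv1.CustomResourceDefinitionVersion{
        {Name: "v1", Served: true, Storage: true},
        {Name: "v2", Served: false, Storage: false}, // unserved: dropped from published OpenAPI
    }
    for _, v := range versions {
        fmt.Printf("version %s served=%v storage=%v\n", v.Name, v.Served, v.Storage)
    }
}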
• [SLOW TEST:15.428 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":110,"skipped":1721,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 11:56:08.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-3fc2955d-0132-4f7a-8faf-513a52cdc8d8 in namespace container-probe-3574 Jul 19 11:56:12.911: INFO: Started pod busybox-3fc2955d-0132-4f7a-8faf-513a52cdc8d8 in namespace container-probe-3574 STEP: checking the pod's current state and verifying that restartCount is present Jul 19 11:56:12.914: INFO: Initial restart count of pod busybox-3fc2955d-0132-4f7a-8faf-513a52cdc8d8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:00:13.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3574" for this suite. • [SLOW TEST:245.626 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1723,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:00:13.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:00:20.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-478" for this suite. • [SLOW TEST:7.072 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":112,"skipped":1737,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:00:20.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-477c217c-af7b-44d1-a05a-358b1bab0576 STEP: Creating configMap with name cm-test-opt-upd-ac7d71b6-47e6-40cc-8ed7-0b6557dcc0c5 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-477c217c-af7b-44d1-a05a-358b1bab0576 STEP: Updating configmap cm-test-opt-upd-ac7d71b6-47e6-40cc-8ed7-0b6557dcc0c5 STEP: Creating configMap with name cm-test-opt-create-c34c3d12-b2d7-45a2-b27e-2b3c51d0c4d0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:01:57.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1926" for this suite. 
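The projected volume in the spec above mixes several configMap sources marked optional: deleting one, updating another, and creating a third must all be reflected in the mounted files without restarting the pod, because the kubelet periodically resyncs projected content. A sketch of one such optional source (names shortened from the log's generated ones):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    vol := corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                        // Optional: the volume mounts even if the configMap is
                        // absent, and files appear or vanish as it comes and goes.
                        Optional: boolPtr(true),
                    },
                }},
            },
        },
    }
    fmt.Printf("projected sources: %d\n", len(vol.VolumeSource.Projected.Sources))
}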
• [SLOW TEST:96.095 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1740,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:01:57.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-db1aab4d-4c3d-4909-9464-233e00448504 STEP: Creating a pod to test consume secrets Jul 19 12:01:57.183: INFO: Waiting up to 5m0s for pod "pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409" in namespace "secrets-7664" to be "success or failure" Jul 19 12:01:57.187: INFO: Pod "pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938802ms Jul 19 12:01:59.191: INFO: Pod "pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007931333s Jul 19 12:02:01.200: INFO: Pod "pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409": Phase="Running", Reason="", readiness=true. Elapsed: 4.016883023s Jul 19 12:02:03.218: INFO: Pod "pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034796347s STEP: Saw pod success Jul 19 12:02:03.218: INFO: Pod "pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409" satisfied condition "success or failure" Jul 19 12:02:03.221: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409 container secret-volume-test: STEP: delete the pod Jul 19 12:02:03.489: INFO: Waiting for pod pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409 to disappear Jul 19 12:02:03.493: INFO: Pod pod-secrets-9e631a51-5fea-407e-aea1-665b0a3ca409 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:02:03.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7664" for this suite. 
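This spec mounts one secret through two separate volumes of the same pod and expects identical content at both mount points. A sketch, with all names illustrative except the secret-volume-test container name:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    secretVol := func(name string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
            },
        }
    }
    spec := corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Volumes:       []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
        Containers: []corev1.Container{{
            Name:    "secret-volume-test",
            Image:   "busybox",
            // Both mounts are backed by the same secret, so the files must match.
            Command: []string{"sh", "-c", "cmp /etc/secret-1/data-1 /etc/secret-2/data-1"},
            VolumeMounts: []corev1.VolumeMount{
                {Name: "secret-volume-1", MountPath: "/etc/secret-1"},
                {Name: "secret-volume-2", MountPath: "/etc/secret-2"},
            },
        }},
    }
    fmt.Println(len(spec.Volumes), "volumes backed by one secret")
}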
• [SLOW TEST:6.430 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:02:03.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 12:02:03.794: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 19 12:02:06.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6668 create -f -' Jul 19 12:02:11.206: INFO: stderr: "" Jul 19 12:02:11.206: INFO: stdout: "e2e-test-crd-publish-openapi-4918-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 19 12:02:11.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6668 delete e2e-test-crd-publish-openapi-4918-crds test-cr' Jul 19 12:02:11.307: INFO: stderr: "" Jul 19 12:02:11.308: INFO: stdout: "e2e-test-crd-publish-openapi-4918-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jul 19 12:02:11.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6668 apply -f -' Jul 19 12:02:11.568: INFO: stderr: "" Jul 19 12:02:11.569: INFO: stdout: "e2e-test-crd-publish-openapi-4918-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 19 12:02:11.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6668 delete e2e-test-crd-publish-openapi-4918-crds test-cr' Jul 19 12:02:11.653: INFO: stderr: "" Jul 19 12:02:11.653: INFO: stdout: "e2e-test-crd-publish-openapi-4918-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 19 12:02:11.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4918-crds' Jul 19 12:02:11.906: INFO: stderr: "" Jul 19 12:02:11.906: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4918-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:02:13.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6668" for this suite. • [SLOW TEST:10.301 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":115,"skipped":1805,"failed":0} [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:02:13.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3349 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3349 I0719 12:02:14.134703 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3349, replica count: 2 I0719 12:02:17.185243 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 12:02:20.185397 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 12:02:23.185638 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0719 12:02:26.185862 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 19 12:02:26.185: INFO: Creating new exec pod Jul 19 12:02:31.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3349 execpodt8bt6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 19 12:02:31.547: INFO: stderr: "I0719 12:02:31.431344 1913 log.go:172] (0xc000119290) (0xc0007aa0a0) Create stream\nI0719 12:02:31.431414 1913 log.go:172] (0xc000119290) (0xc0007aa0a0) Stream added, broadcasting: 1\nI0719 12:02:31.439255 1913 log.go:172] (0xc000119290) Reply frame received for 1\nI0719 12:02:31.439301 1913 log.go:172] (0xc000119290) (0xc000611c20) Create stream\nI0719 12:02:31.439314 1913 log.go:172] (0xc000119290) (0xc000611c20) 
Stream added, broadcasting: 3\nI0719 12:02:31.441600 1913 log.go:172] (0xc000119290) Reply frame received for 3\nI0719 12:02:31.441635 1913 log.go:172] (0xc000119290) (0xc00017e000) Create stream\nI0719 12:02:31.441651 1913 log.go:172] (0xc000119290) (0xc00017e000) Stream added, broadcasting: 5\nI0719 12:02:31.442441 1913 log.go:172] (0xc000119290) Reply frame received for 5\nI0719 12:02:31.540433 1913 log.go:172] (0xc000119290) Data frame received for 5\nI0719 12:02:31.540465 1913 log.go:172] (0xc00017e000) (5) Data frame handling\nI0719 12:02:31.540496 1913 log.go:172] (0xc00017e000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0719 12:02:31.540844 1913 log.go:172] (0xc000119290) Data frame received for 5\nI0719 12:02:31.540877 1913 log.go:172] (0xc00017e000) (5) Data frame handling\nI0719 12:02:31.540898 1913 log.go:172] (0xc00017e000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0719 12:02:31.541142 1913 log.go:172] (0xc000119290) Data frame received for 5\nI0719 12:02:31.541170 1913 log.go:172] (0xc00017e000) (5) Data frame handling\nI0719 12:02:31.541378 1913 log.go:172] (0xc000119290) Data frame received for 3\nI0719 12:02:31.541395 1913 log.go:172] (0xc000611c20) (3) Data frame handling\nI0719 12:02:31.542874 1913 log.go:172] (0xc000119290) Data frame received for 1\nI0719 12:02:31.542902 1913 log.go:172] (0xc0007aa0a0) (1) Data frame handling\nI0719 12:02:31.542919 1913 log.go:172] (0xc0007aa0a0) (1) Data frame sent\nI0719 12:02:31.542940 1913 log.go:172] (0xc000119290) (0xc0007aa0a0) Stream removed, broadcasting: 1\nI0719 12:02:31.542979 1913 log.go:172] (0xc000119290) Go away received\nI0719 12:02:31.543233 1913 log.go:172] (0xc000119290) (0xc0007aa0a0) Stream removed, broadcasting: 1\nI0719 12:02:31.543253 1913 log.go:172] (0xc000119290) (0xc000611c20) Stream removed, broadcasting: 3\nI0719 12:02:31.543264 1913 log.go:172] (0xc000119290) (0xc00017e000) Stream removed, broadcasting: 5\n" Jul 19 12:02:31.547: INFO: stdout: "" Jul 19 12:02:31.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3349 execpodt8bt6 -- /bin/sh -x -c nc -zv -t -w 2 10.101.170.112 80' Jul 19 12:02:31.749: INFO: stderr: "I0719 12:02:31.675945 1937 log.go:172] (0xc0000169a0) (0xc0008ac000) Create stream\nI0719 12:02:31.675991 1937 log.go:172] (0xc0000169a0) (0xc0008ac000) Stream added, broadcasting: 1\nI0719 12:02:31.678328 1937 log.go:172] (0xc0000169a0) Reply frame received for 1\nI0719 12:02:31.678373 1937 log.go:172] (0xc0000169a0) (0xc000609ae0) Create stream\nI0719 12:02:31.678387 1937 log.go:172] (0xc0000169a0) (0xc000609ae0) Stream added, broadcasting: 3\nI0719 12:02:31.679432 1937 log.go:172] (0xc0000169a0) Reply frame received for 3\nI0719 12:02:31.679507 1937 log.go:172] (0xc0000169a0) (0xc0008ac140) Create stream\nI0719 12:02:31.679537 1937 log.go:172] (0xc0000169a0) (0xc0008ac140) Stream added, broadcasting: 5\nI0719 12:02:31.680847 1937 log.go:172] (0xc0000169a0) Reply frame received for 5\nI0719 12:02:31.742853 1937 log.go:172] (0xc0000169a0) Data frame received for 3\nI0719 12:02:31.742882 1937 log.go:172] (0xc000609ae0) (3) Data frame handling\nI0719 12:02:31.742935 1937 log.go:172] (0xc0000169a0) Data frame received for 5\nI0719 12:02:31.742968 1937 log.go:172] (0xc0008ac140) (5) Data frame handling\nI0719 12:02:31.742988 1937 log.go:172] (0xc0008ac140) (5) Data frame sent\nI0719 12:02:31.743004 1937 log.go:172] (0xc0000169a0) Data frame received for 5\nI0719 12:02:31.743013 1937 
log.go:172] (0xc0008ac140) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.170.112 80\nConnection to 10.101.170.112 80 port [tcp/http] succeeded!\nI0719 12:02:31.744405 1937 log.go:172] (0xc0000169a0) Data frame received for 1\nI0719 12:02:31.744431 1937 log.go:172] (0xc0008ac000) (1) Data frame handling\nI0719 12:02:31.744454 1937 log.go:172] (0xc0008ac000) (1) Data frame sent\nI0719 12:02:31.744475 1937 log.go:172] (0xc0000169a0) (0xc0008ac000) Stream removed, broadcasting: 1\nI0719 12:02:31.744501 1937 log.go:172] (0xc0000169a0) Go away received\nI0719 12:02:31.745061 1937 log.go:172] (0xc0000169a0) (0xc0008ac000) Stream removed, broadcasting: 1\nI0719 12:02:31.745083 1937 log.go:172] (0xc0000169a0) (0xc000609ae0) Stream removed, broadcasting: 3\nI0719 12:02:31.745092 1937 log.go:172] (0xc0000169a0) (0xc0008ac140) Stream removed, broadcasting: 5\n" Jul 19 12:02:31.749: INFO: stdout: "" Jul 19 12:02:31.749: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:02:31.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3349" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.025 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":116,"skipped":1805,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:02:31.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-8841ede9-73a1-4e7e-bacc-ea7f483f0cbf STEP: Creating a pod to test consume configMaps Jul 19 12:02:31.898: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c" in namespace "projected-8948" to be "success or failure" Jul 19 12:02:31.904: INFO: Pod "pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.529471ms Jul 19 12:02:33.930: INFO: Pod "pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032054657s Jul 19 12:02:35.958: INFO: Pod "pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c": Phase="Running", Reason="", readiness=true. Elapsed: 4.060112234s Jul 19 12:02:38.037: INFO: Pod "pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.138649178s STEP: Saw pod success Jul 19 12:02:38.037: INFO: Pod "pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c" satisfied condition "success or failure" Jul 19 12:02:38.054: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c container projected-configmap-volume-test: STEP: delete the pod Jul 19 12:02:38.452: INFO: Waiting for pod pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c to disappear Jul 19 12:02:38.455: INFO: Pod pod-projected-configmaps-893c3698-27fe-47ec-bfcb-45b0f050bb8c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:02:38.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8948" for this suite. • [SLOW TEST:6.631 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1807,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:02:38.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 19 12:02:38.760: INFO: Waiting up to 5m0s for pod "downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed" in namespace "downward-api-808" to be "success or failure" Jul 19 12:02:38.955: INFO: Pod "downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 195.508834ms Jul 19 12:02:40.960: INFO: Pod "downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199937465s Jul 19 12:02:42.963: INFO: Pod "downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed": Phase="Running", Reason="", readiness=true. Elapsed: 4.203247368s Jul 19 12:02:44.967: INFO: Pod "downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.207308753s STEP: Saw pod success Jul 19 12:02:44.967: INFO: Pod "downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed" satisfied condition "success or failure" Jul 19 12:02:44.971: INFO: Trying to get logs from node jerma-worker2 pod downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed container dapi-container: STEP: delete the pod Jul 19 12:02:45.040: INFO: Waiting for pod downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed to disappear Jul 19 12:02:45.045: INFO: Pod downward-api-5e6bc5b1-f037-4fa8-88da-da33699aa7ed no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:02:45.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-808" for this suite. • [SLOW TEST:6.588 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:02:45.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-89eb70df-4e5f-4d9d-9fa4-c1f3b313a829 STEP: Creating a pod to test consume secrets Jul 19 12:02:45.256: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2" in namespace "projected-6614" to be "success or failure" Jul 19 12:02:45.261: INFO: Pod "pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.792104ms Jul 19 12:02:47.267: INFO: Pod "pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011030784s Jul 19 12:02:49.272: INFO: Pod "pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015640113s STEP: Saw pod success Jul 19 12:02:49.272: INFO: Pod "pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2" satisfied condition "success or failure" Jul 19 12:02:49.275: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2 container secret-volume-test: STEP: delete the pod Jul 19 12:02:49.317: INFO: Waiting for pod pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2 to disappear Jul 19 12:02:49.333: INFO: Pod pod-projected-secrets-9506d1e2-ae0a-4b04-b73e-329f32b3e6d2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:02:49.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6614" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1848,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:02:49.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 19 12:02:59.508: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 19 12:02:59.522: INFO: Pod pod-with-prestop-exec-hook still exists Jul 19 12:03:01.523: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 19 12:03:01.528: INFO: Pod pod-with-prestop-exec-hook still exists Jul 19 12:03:03.523: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 19 12:03:03.527: INFO: Pod pod-with-prestop-exec-hook still exists Jul 19 12:03:05.523: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 19 12:03:05.527: INFO: Pod pod-with-prestop-exec-hook still exists Jul 19 12:03:07.523: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 19 12:03:07.527: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:03:07.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3831" for this suite. 
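A preStop exec handler runs inside the container, driven by the kubelet, after the deletion request but before the container is stopped; that ordering is why the log polls for the pod to disappear and only then checks that the hook fired. A sketch of such a pod follows. The handler command and its target are assumptions, and note that recent client-go names the handler struct corev1.LifecycleHandler, while releases contemporary with this log called it corev1.Handler:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-prestop-exec-hook",
                Image: "busybox",
                Lifecycle: &corev1.Lifecycle{
                    // Runs on deletion, before the container stops; here it
                    // pings an (assumed) handler service so the test can
                    // verify the hook executed.
                    PreStop: &corev1.LifecycleHandler{
                        Exec: &corev1.ExecAction{
                            Command: []string{"sh", "-c", "wget -qO- http://handler-service:8080/echo?msg=prestop"},
                        },
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name, "registers a preStop exec hook")
}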
• [SLOW TEST:18.199 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1849,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:03:07.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 19 12:03:07.998: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 19 12:03:10.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756988, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756988, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730756987, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 19 12:03:13.082: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 12:03:13.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:03:15.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8741" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.682 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":121,"skipped":1867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:03:15.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 19 12:03:15.284: INFO: Waiting up to 5m0s for pod "downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b" in namespace "downward-api-5071" to be "success or failure" Jul 19 12:03:15.288: INFO: Pod "downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.71945ms Jul 19 12:03:17.316: INFO: Pod "downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031692233s Jul 19 12:03:19.320: INFO: Pod "downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035237288s STEP: Saw pod success Jul 19 12:03:19.320: INFO: Pod "downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b" satisfied condition "success or failure" Jul 19 12:03:19.322: INFO: Trying to get logs from node jerma-worker2 pod downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b container dapi-container: STEP: delete the pod Jul 19 12:03:19.370: INFO: Waiting for pod downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b to disappear Jul 19 12:03:19.375: INFO: Pod downward-api-183fda94-b7b6-4122-8bdd-00a8eced575b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:03:19.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5071" for this suite. 
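The pod UID reaches the container through a downward API environment variable, that is, an EnvVarSource carrying a fieldRef. Only the metadata.uid field path and the dapi-container name are grounded in the log; the rest is an illustrative sketch:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    container := corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "env | grep POD_UID"},
        Env: []corev1.EnvVar{{
            Name: "POD_UID",
            ValueFrom: &corev1.EnvVarSource{
                // Resolved by the kubelet at container start.
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
            },
        }},
    }
    fmt.Println(container.Env[0].Name, "<-", container.Env[0].ValueFrom.FieldRef.FieldPath)
}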
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1916,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:03:19.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 19 12:03:23.487: INFO: &Pod{ObjectMeta:{send-events-3219ec97-fd9d-44c7-a843-475991025b59 events-1751 /api/v1/namespaces/events-1751/pods/send-events-3219ec97-fd9d-44c7-a843-475991025b59 40f1ca17-6b99-481b-89bf-0a47d25ef1f4 2417087 0 2020-07-19 12:03:19 +0000 UTC map[name:foo time:467238642] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pfhtr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pfhtr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pfhtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,}
,Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:03:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:03:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.210,StartTime:2020-07-19 12:03:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 12:03:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://80a556f6b1b6cb4fbc32c47d1cbca70a59151160e69e9279d84a6132c9ee546d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jul 19 12:03:25.491: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 19 12:03:27.495: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:03:27.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1751" for this suite. 
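Scheduler and kubelet events are ordinary Event objects tied to the pod through involvedObject, so they can be listed with a field selector. A reconstruction of that query, assuming a context-aware client-go; the pod name and namespace come from the log, but the code is not the suite's own:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    events, err := cs.CoreV1().Events("events-1751").List(context.TODO(), metav1.ListOptions{
        FieldSelector: "involvedObject.name=send-events-3219ec97-fd9d-44c7-a843-475991025b59",
    })
    if err != nil {
        panic(err)
    }
    for _, e := range events.Items {
        // Source.Component distinguishes the two events the spec waits for:
        // "default-scheduler" for Scheduled, "kubelet" for Pulled/Created/Started.
        fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
    }
}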
• [SLOW TEST:8.165 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":123,"skipped":1925,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:03:27.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8789 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jul 19 12:03:27.723: INFO: Found 0 stateful pods, waiting for 3 Jul 19 12:03:37.726: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 12:03:37.726: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 12:03:37.726: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 19 12:03:47.726: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 12:03:47.726: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 12:03:47.726: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 19 12:03:47.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8789 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 19 12:03:47.996: INFO: stderr: "I0719 12:03:47.857782 1960 log.go:172] (0xc000bb40b0) (0xc00070da40) Create stream\nI0719 12:03:47.857853 1960 log.go:172] (0xc000bb40b0) (0xc00070da40) Stream added, broadcasting: 1\nI0719 12:03:47.860200 1960 log.go:172] (0xc000bb40b0) Reply frame received for 1\nI0719 12:03:47.860227 1960 log.go:172] (0xc000bb40b0) (0xc00093c000) Create stream\nI0719 12:03:47.860233 1960 log.go:172] (0xc000bb40b0) (0xc00093c000) Stream added, broadcasting: 3\nI0719 12:03:47.861221 1960 log.go:172] (0xc000bb40b0) Reply frame received for 3\nI0719 12:03:47.861279 1960 log.go:172] (0xc000bb40b0) (0xc000648000) Create stream\nI0719 12:03:47.861300 1960 log.go:172] (0xc000bb40b0) (0xc000648000) Stream added, broadcasting: 5\nI0719 12:03:47.862097 1960 log.go:172] (0xc000bb40b0) 
Reply frame received for 5\nI0719 12:03:47.923166 1960 log.go:172] (0xc000bb40b0) Data frame received for 5\nI0719 12:03:47.923195 1960 log.go:172] (0xc000648000) (5) Data frame handling\nI0719 12:03:47.923216 1960 log.go:172] (0xc000648000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:03:47.990624 1960 log.go:172] (0xc000bb40b0) Data frame received for 3\nI0719 12:03:47.990677 1960 log.go:172] (0xc00093c000) (3) Data frame handling\nI0719 12:03:47.990698 1960 log.go:172] (0xc00093c000) (3) Data frame sent\nI0719 12:03:47.990721 1960 log.go:172] (0xc000bb40b0) Data frame received for 3\nI0719 12:03:47.990737 1960 log.go:172] (0xc00093c000) (3) Data frame handling\nI0719 12:03:47.990777 1960 log.go:172] (0xc000bb40b0) Data frame received for 5\nI0719 12:03:47.990809 1960 log.go:172] (0xc000648000) (5) Data frame handling\nI0719 12:03:47.991900 1960 log.go:172] (0xc000bb40b0) Data frame received for 1\nI0719 12:03:47.991923 1960 log.go:172] (0xc00070da40) (1) Data frame handling\nI0719 12:03:47.991941 1960 log.go:172] (0xc00070da40) (1) Data frame sent\nI0719 12:03:47.991989 1960 log.go:172] (0xc000bb40b0) (0xc00070da40) Stream removed, broadcasting: 1\nI0719 12:03:47.992057 1960 log.go:172] (0xc000bb40b0) Go away received\nI0719 12:03:47.992550 1960 log.go:172] (0xc000bb40b0) (0xc00070da40) Stream removed, broadcasting: 1\nI0719 12:03:47.992568 1960 log.go:172] (0xc000bb40b0) (0xc00093c000) Stream removed, broadcasting: 3\nI0719 12:03:47.992581 1960 log.go:172] (0xc000bb40b0) (0xc000648000) Stream removed, broadcasting: 5\n" Jul 19 12:03:47.996: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 19 12:03:47.996: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 19 12:03:58.128: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 19 12:04:08.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8789 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 19 12:04:08.383: INFO: stderr: "I0719 12:04:08.323129 1982 log.go:172] (0xc00051edc0) (0xc00064c000) Create stream\nI0719 12:04:08.323166 1982 log.go:172] (0xc00051edc0) (0xc00064c000) Stream added, broadcasting: 1\nI0719 12:04:08.324933 1982 log.go:172] (0xc00051edc0) Reply frame received for 1\nI0719 12:04:08.324982 1982 log.go:172] (0xc00051edc0) (0xc000695b80) Create stream\nI0719 12:04:08.325004 1982 log.go:172] (0xc00051edc0) (0xc000695b80) Stream added, broadcasting: 3\nI0719 12:04:08.325677 1982 log.go:172] (0xc00051edc0) Reply frame received for 3\nI0719 12:04:08.325714 1982 log.go:172] (0xc00051edc0) (0xc000695d60) Create stream\nI0719 12:04:08.325734 1982 log.go:172] (0xc00051edc0) (0xc000695d60) Stream added, broadcasting: 5\nI0719 12:04:08.326596 1982 log.go:172] (0xc00051edc0) Reply frame received for 5\nI0719 12:04:08.378686 1982 log.go:172] (0xc00051edc0) Data frame received for 3\nI0719 12:04:08.378724 1982 log.go:172] (0xc000695b80) (3) Data frame handling\nI0719 12:04:08.378739 1982 log.go:172] (0xc000695b80) (3) Data frame sent\nI0719 12:04:08.378844 1982 log.go:172] (0xc00051edc0) Data frame received for 3\nI0719 12:04:08.378871 1982 log.go:172] (0xc000695b80) (3) Data frame handling\nI0719 
12:04:08.378891 1982 log.go:172] (0xc00051edc0) Data frame received for 5\nI0719 12:04:08.378912 1982 log.go:172] (0xc000695d60) (5) Data frame handling\nI0719 12:04:08.378925 1982 log.go:172] (0xc000695d60) (5) Data frame sent\nI0719 12:04:08.378937 1982 log.go:172] (0xc00051edc0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0719 12:04:08.378943 1982 log.go:172] (0xc000695d60) (5) Data frame handling\nI0719 12:04:08.379719 1982 log.go:172] (0xc00051edc0) Data frame received for 1\nI0719 12:04:08.379735 1982 log.go:172] (0xc00064c000) (1) Data frame handling\nI0719 12:04:08.379746 1982 log.go:172] (0xc00064c000) (1) Data frame sent\nI0719 12:04:08.379828 1982 log.go:172] (0xc00051edc0) (0xc00064c000) Stream removed, broadcasting: 1\nI0719 12:04:08.379858 1982 log.go:172] (0xc00051edc0) Go away received\nI0719 12:04:08.380235 1982 log.go:172] (0xc00051edc0) (0xc00064c000) Stream removed, broadcasting: 1\nI0719 12:04:08.380256 1982 log.go:172] (0xc00051edc0) (0xc000695b80) Stream removed, broadcasting: 3\nI0719 12:04:08.380266 1982 log.go:172] (0xc00051edc0) (0xc000695d60) Stream removed, broadcasting: 5\n" Jul 19 12:04:08.383: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 19 12:04:08.383: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 19 12:04:38.403: INFO: Waiting for StatefulSet statefulset-8789/ss2 to complete update Jul 19 12:04:38.403: INFO: Waiting for Pod statefulset-8789/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 19 12:04:48.897: INFO: Waiting for StatefulSet statefulset-8789/ss2 to complete update STEP: Rolling back to a previous revision Jul 19 12:04:58.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8789 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 19 12:04:59.001: INFO: stderr: "I0719 12:04:58.587952 2004 log.go:172] (0xc0009aa630) (0xc000bdc000) Create stream\nI0719 12:04:58.588002 2004 log.go:172] (0xc0009aa630) (0xc000bdc000) Stream added, broadcasting: 1\nI0719 12:04:58.590117 2004 log.go:172] (0xc0009aa630) Reply frame received for 1\nI0719 12:04:58.590161 2004 log.go:172] (0xc0009aa630) (0xc000bdc0a0) Create stream\nI0719 12:04:58.590172 2004 log.go:172] (0xc0009aa630) (0xc000bdc0a0) Stream added, broadcasting: 3\nI0719 12:04:58.591144 2004 log.go:172] (0xc0009aa630) Reply frame received for 3\nI0719 12:04:58.591188 2004 log.go:172] (0xc0009aa630) (0xc000713ae0) Create stream\nI0719 12:04:58.591202 2004 log.go:172] (0xc0009aa630) (0xc000713ae0) Stream added, broadcasting: 5\nI0719 12:04:58.592253 2004 log.go:172] (0xc0009aa630) Reply frame received for 5\nI0719 12:04:58.641177 2004 log.go:172] (0xc0009aa630) Data frame received for 5\nI0719 12:04:58.641201 2004 log.go:172] (0xc000713ae0) (5) Data frame handling\nI0719 12:04:58.641215 2004 log.go:172] (0xc000713ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:04:58.992885 2004 log.go:172] (0xc0009aa630) Data frame received for 3\nI0719 12:04:58.992940 2004 log.go:172] (0xc000bdc0a0) (3) Data frame handling\nI0719 12:04:58.992973 2004 log.go:172] (0xc000bdc0a0) (3) Data frame sent\nI0719 12:04:58.992992 2004 log.go:172] (0xc0009aa630) Data frame received for 3\nI0719 12:04:58.993008 2004 log.go:172] (0xc000bdc0a0) (3) Data frame handling\nI0719 12:04:58.993218 2004 log.go:172] (0xc0009aa630) Data 
frame received for 5\nI0719 12:04:58.993247 2004 log.go:172] (0xc000713ae0) (5) Data frame handling\nI0719 12:04:58.995661 2004 log.go:172] (0xc0009aa630) Data frame received for 1\nI0719 12:04:58.995704 2004 log.go:172] (0xc000bdc000) (1) Data frame handling\nI0719 12:04:58.995739 2004 log.go:172] (0xc000bdc000) (1) Data frame sent\nI0719 12:04:58.995823 2004 log.go:172] (0xc0009aa630) (0xc000bdc000) Stream removed, broadcasting: 1\nI0719 12:04:58.996029 2004 log.go:172] (0xc0009aa630) Go away received\nI0719 12:04:58.996263 2004 log.go:172] (0xc0009aa630) (0xc000bdc000) Stream removed, broadcasting: 1\nI0719 12:04:58.996290 2004 log.go:172] (0xc0009aa630) (0xc000bdc0a0) Stream removed, broadcasting: 3\nI0719 12:04:58.996305 2004 log.go:172] (0xc0009aa630) (0xc000713ae0) Stream removed, broadcasting: 5\n" Jul 19 12:04:59.001: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 19 12:04:59.001: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 19 12:05:09.030: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 19 12:05:19.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8789 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 19 12:05:19.490: INFO: stderr: "I0719 12:05:19.422508 2024 log.go:172] (0xc000ac74a0) (0xc000aa6500) Create stream\nI0719 12:05:19.422550 2024 log.go:172] (0xc000ac74a0) (0xc000aa6500) Stream added, broadcasting: 1\nI0719 12:05:19.424237 2024 log.go:172] (0xc000ac74a0) Reply frame received for 1\nI0719 12:05:19.424264 2024 log.go:172] (0xc000ac74a0) (0xc000a0c000) Create stream\nI0719 12:05:19.424275 2024 log.go:172] (0xc000ac74a0) (0xc000a0c000) Stream added, broadcasting: 3\nI0719 12:05:19.425309 2024 log.go:172] (0xc000ac74a0) Reply frame received for 3\nI0719 12:05:19.425351 2024 log.go:172] (0xc000ac74a0) (0xc000aa65a0) Create stream\nI0719 12:05:19.425366 2024 log.go:172] (0xc000ac74a0) (0xc000aa65a0) Stream added, broadcasting: 5\nI0719 12:05:19.426024 2024 log.go:172] (0xc000ac74a0) Reply frame received for 5\nI0719 12:05:19.483356 2024 log.go:172] (0xc000ac74a0) Data frame received for 5\nI0719 12:05:19.483402 2024 log.go:172] (0xc000aa65a0) (5) Data frame handling\nI0719 12:05:19.483430 2024 log.go:172] (0xc000aa65a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0719 12:05:19.483466 2024 log.go:172] (0xc000ac74a0) Data frame received for 3\nI0719 12:05:19.483478 2024 log.go:172] (0xc000a0c000) (3) Data frame handling\nI0719 12:05:19.483493 2024 log.go:172] (0xc000a0c000) (3) Data frame sent\nI0719 12:05:19.483505 2024 log.go:172] (0xc000ac74a0) Data frame received for 3\nI0719 12:05:19.483519 2024 log.go:172] (0xc000a0c000) (3) Data frame handling\nI0719 12:05:19.483689 2024 log.go:172] (0xc000ac74a0) Data frame received for 5\nI0719 12:05:19.483701 2024 log.go:172] (0xc000aa65a0) (5) Data frame handling\nI0719 12:05:19.485045 2024 log.go:172] (0xc000ac74a0) Data frame received for 1\nI0719 12:05:19.485069 2024 log.go:172] (0xc000aa6500) (1) Data frame handling\nI0719 12:05:19.485096 2024 log.go:172] (0xc000aa6500) (1) Data frame sent\nI0719 12:05:19.485132 2024 log.go:172] (0xc000ac74a0) (0xc000aa6500) Stream removed, broadcasting: 1\nI0719 12:05:19.485227 2024 log.go:172] (0xc000ac74a0) Go away received\nI0719 12:05:19.485602 2024 log.go:172] (0xc000ac74a0) (0xc000aa6500) Stream 
removed, broadcasting: 1\nI0719 12:05:19.485624 2024 log.go:172] (0xc000ac74a0) (0xc000a0c000) Stream removed, broadcasting: 3\nI0719 12:05:19.485634 2024 log.go:172] (0xc000ac74a0) (0xc000aa65a0) Stream removed, broadcasting: 5\n" Jul 19 12:05:19.490: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 19 12:05:19.490: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 19 12:05:29.951: INFO: Waiting for StatefulSet statefulset-8789/ss2 to complete update Jul 19 12:05:29.951: INFO: Waiting for Pod statefulset-8789/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 19 12:05:29.951: INFO: Waiting for Pod statefulset-8789/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 19 12:05:39.958: INFO: Waiting for StatefulSet statefulset-8789/ss2 to complete update Jul 19 12:05:39.958: INFO: Waiting for Pod statefulset-8789/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 19 12:05:50.395: INFO: Waiting for StatefulSet statefulset-8789/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 19 12:05:59.960: INFO: Deleting all statefulset in ns statefulset-8789 Jul 19 12:05:59.963: INFO: Scaling statefulset ss2 to 0 Jul 19 12:06:30.000: INFO: Waiting for statefulset status.replicas updated to 0 Jul 19 12:06:30.006: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:06:30.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8789" for this suite. • [SLOW TEST:182.476 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":124,"skipped":1926,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:06:30.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:06:30.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3477" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":125,"skipped":1943,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:06:30.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jul 19 12:06:37.225: INFO: Successfully updated pod "adopt-release-7b62d" STEP: Checking that the Job readopts the Pod Jul 19 12:06:37.225: INFO: Waiting up to 15m0s for pod "adopt-release-7b62d" in namespace "job-8518" to be "adopted" Jul 19 12:06:37.426: INFO: Pod "adopt-release-7b62d": Phase="Running", Reason="", readiness=true. Elapsed: 201.121129ms Jul 19 12:06:39.431: INFO: Pod "adopt-release-7b62d": Phase="Running", Reason="", readiness=true. Elapsed: 2.206243857s Jul 19 12:06:39.431: INFO: Pod "adopt-release-7b62d" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jul 19 12:06:40.097: INFO: Successfully updated pod "adopt-release-7b62d" STEP: Checking that the Job releases the Pod Jul 19 12:06:40.097: INFO: Waiting up to 15m0s for pod "adopt-release-7b62d" in namespace "job-8518" to be "released" Jul 19 12:06:40.485: INFO: Pod "adopt-release-7b62d": Phase="Running", Reason="", readiness=true. Elapsed: 387.239428ms Jul 19 12:06:43.061: INFO: Pod "adopt-release-7b62d": Phase="Running", Reason="", readiness=true. Elapsed: 2.963517346s Jul 19 12:06:43.061: INFO: Pod "adopt-release-7b62d" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:06:43.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8518" for this suite. 
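Adoption and release in the Job test above are driven entirely by labels and ownerReferences: the Job controller re-adds a controller ownerReference to an orphaned pod whose labels match its selector, and releases a pod whose labels stop matching. A short sketch of the two conditions the test polls for (helper names illustrative; standard client-go types):

    package jobadopt

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // adoptedBy reports whether the pod currently has a controller
    // ownerReference pointing at the named Job. "Adopted" in the log above
    // means this became true again after the reference was stripped.
    func adoptedBy(pod *corev1.Pod, jobName string) bool {
        ref := metav1.GetControllerOf(pod)
        return ref != nil && ref.Kind == "Job" && ref.Name == jobName
    }

    // released reports the opposite condition: no controller owns the pod
    // any more, which is what the test waits for after removing the labels.
    func released(pod *corev1.Pod) bool {
        return metav1.GetControllerOf(pod) == nil
    }
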
• [SLOW TEST:13.364 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":126,"skipped":1945,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:06:43.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jul 19 12:06:49.218: INFO: Pod name wrapped-volume-race-551a084d-a43c-4aa5-a559-82023baa0424: Found 0 pods out of 5 Jul 19 12:06:54.401: INFO: Pod name wrapped-volume-race-551a084d-a43c-4aa5-a559-82023baa0424: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-551a084d-a43c-4aa5-a559-82023baa0424 in namespace emptydir-wrapper-5876, will wait for the garbage collector to delete the pods Jul 19 12:07:10.812: INFO: Deleting ReplicationController wrapped-volume-race-551a084d-a43c-4aa5-a559-82023baa0424 took: 27.550783ms Jul 19 12:07:11.212: INFO: Terminating ReplicationController wrapped-volume-race-551a084d-a43c-4aa5-a559-82023baa0424 pods took: 400.39008ms STEP: Creating RC which spawns configmap-volume pods Jul 19 12:07:27.957: INFO: Pod name wrapped-volume-race-fd426ac8-827a-4386-9d6d-2ee382e28519: Found 0 pods out of 5 Jul 19 12:07:33.825: INFO: Pod name wrapped-volume-race-fd426ac8-827a-4386-9d6d-2ee382e28519: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fd426ac8-827a-4386-9d6d-2ee382e28519 in namespace emptydir-wrapper-5876, will wait for the garbage collector to delete the pods Jul 19 12:07:51.166: INFO: Deleting ReplicationController wrapped-volume-race-fd426ac8-827a-4386-9d6d-2ee382e28519 took: 226.183541ms Jul 19 12:07:51.966: INFO: Terminating ReplicationController wrapped-volume-race-fd426ac8-827a-4386-9d6d-2ee382e28519 pods took: 800.262029ms STEP: Creating RC which spawns configmap-volume pods Jul 19 12:08:08.163: INFO: Pod name wrapped-volume-race-dbbd649b-a083-4e6b-b766-a8206a4b4173: Found 0 pods out of 5 Jul 19 12:08:13.185: INFO: Pod name wrapped-volume-race-dbbd649b-a083-4e6b-b766-a8206a4b4173: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dbbd649b-a083-4e6b-b766-a8206a4b4173 in namespace emptydir-wrapper-5876, will wait for the garbage collector to delete the pods Jul 19 12:08:27.593: INFO: Deleting ReplicationController wrapped-volume-race-dbbd649b-a083-4e6b-b766-a8206a4b4173 took: 5.301166ms Jul 19 
12:08:28.094: INFO: Terminating ReplicationController wrapped-volume-race-dbbd649b-a083-4e6b-b766-a8206a4b4173 pods took: 500.297866ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:08:40.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5876" for this suite. • [SLOW TEST:117.422 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":127,"skipped":1947,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:08:40.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 19 12:08:41.009: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7479 /api/v1/namespaces/watch-7479/configmaps/e2e-watch-test-watch-closed 6ac176db-5b40-4fa5-b869-2f195bcb6047 2419080 0 2020-07-19 12:08:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 19 12:08:41.009: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7479 /api/v1/namespaces/watch-7479/configmaps/e2e-watch-test-watch-closed 6ac176db-5b40-4fa5-b869-2f195bcb6047 2419081 0 2020-07-19 12:08:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 19 12:08:41.020: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7479 /api/v1/namespaces/watch-7479/configmaps/e2e-watch-test-watch-closed 6ac176db-5b40-4fa5-b869-2f195bcb6047 2419082 0 2020-07-19 12:08:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 19 12:08:41.020: INFO: Got : 
DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7479 /api/v1/namespaces/watch-7479/configmaps/e2e-watch-test-watch-closed 6ac176db-5b40-4fa5-b869-2f195bcb6047 2419083 0 2020-07-19 12:08:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:08:41.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7479" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":128,"skipped":1967,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:08:41.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 19 12:08:41.135: INFO: PodSpec: initContainers in spec.initContainers Jul 19 12:09:34.026: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a9a1fee6-22a0-4941-9332-765f6ac2a93c", GenerateName:"", Namespace:"init-container-370", SelfLink:"/api/v1/namespaces/init-container-370/pods/pod-init-a9a1fee6-22a0-4941-9332-765f6ac2a93c", UID:"afbf0f80-53ce-41f5-8157-a984fe779da3", ResourceVersion:"2419455", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730757321, loc:(*time.Location)(0x78f7140)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"135969289"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8csmj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00387a140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8csmj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8csmj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8csmj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004561b88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004218420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004561c10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004561c30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004561c38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004561c3c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730757321, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730757321, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730757321, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730757321, loc:(*time.Location)(0x78f7140)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.11", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.11"}}, StartTime:(*v1.Time)(0xc0037201c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003720200), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b438f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"containerd://6d36a179fdfbabbb83e13b9b678855be6ea9c9da9f564012c1435b99432db697", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003720220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037201e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004561cbf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:09:34.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-370" for this suite. • [SLOW TEST:53.097 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":129,"skipped":2007,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:09:34.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 19 12:09:34.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2039' Jul 19 12:09:34.390: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 19 12:09:34.390: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Jul 19 12:09:34.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-2039' Jul 19 12:09:34.528: INFO: stderr: "" Jul 19 12:09:34.528: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:09:34.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2039" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":130,"skipped":2018,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:09:34.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jul 19 12:09:35.171: INFO: created pod pod-service-account-defaultsa Jul 19 12:09:35.171: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 19 12:09:35.191: INFO: created pod pod-service-account-mountsa Jul 19 12:09:35.191: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 19 12:09:35.200: INFO: created pod pod-service-account-nomountsa Jul 19 12:09:35.200: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 19 12:09:35.240: INFO: created pod pod-service-account-defaultsa-mountspec Jul 19 12:09:35.240: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 19 12:09:35.273: INFO: created pod pod-service-account-mountsa-mountspec Jul 19 12:09:35.273: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 19 12:09:35.289: INFO: created pod pod-service-account-nomountsa-mountspec Jul 19 12:09:35.289: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 19 12:09:35.333: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 19 12:09:35.333: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 19 12:09:35.515: INFO: created pod pod-service-account-mountsa-nomountspec Jul 19 12:09:35.515: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 19 12:09:35.544: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 19 12:09:35.544: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] 
[sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:09:35.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5154" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":131,"skipped":2031,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:09:35.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 12:09:36.040: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec" in namespace "security-context-test-9545" to be "success or failure" Jul 19 12:09:36.101: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Pending", Reason="", readiness=false. Elapsed: 60.93425ms Jul 19 12:09:38.213: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172731841s Jul 19 12:09:40.369: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328651979s Jul 19 12:09:42.782: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.741648379s Jul 19 12:09:44.903: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863073002s Jul 19 12:09:47.046: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Pending", Reason="", readiness=false. Elapsed: 11.005415153s Jul 19 12:09:49.435: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Running", Reason="", readiness=true. Elapsed: 13.394659371s Jul 19 12:09:51.818: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.777634242s Jul 19 12:09:51.818: INFO: Pod "busybox-readonly-false-f17e1f69-9f2d-4e89-81f2-ea355c5f94ec" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:09:51.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9545" for this suite. 
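The pod above succeeds because readOnlyRootFilesystem=false leaves the container's root filesystem writable; the same write would fail with the flag set to true. A sketch of the relevant securityContext wiring (pod name and command are illustrative, not the test's exact spec):

    package secctx

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // writableRootfsPod writes to the root filesystem and exits 0, which
    // only works while ReadOnlyRootFilesystem is false.
    func writableRootfsPod(ns string) *corev1.Pod {
        readOnly := false
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false", Namespace: ns},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:            "busybox",
                    Image:           "docker.io/library/busybox:1.29",
                    Command:         []string{"sh", "-c", "echo ok > /probe"},
                    SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
                }},
            },
        }
    }
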
• [SLOW TEST:16.365 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2040,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:09:52.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0719 12:10:34.081186 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 19 12:10:34.081: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:10:34.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5989" for this suite. 
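The 30-second wait above is the point of the test: when the RC is deleted with an orphaning policy, the garbage collector must strip the pods' ownerReferences rather than cascade the delete, so the pods have to survive. A minimal client-go sketch of such a delete (function name illustrative; v0.18+ signatures with a context argument assumed):

    package gcorphan

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCOrphaningPods deletes a ReplicationController while leaving
    // its pods behind. With PropagationPolicy=Orphan the garbage collector
    // removes the ownerReferences from the dependents instead of deleting
    // them, which is the behaviour the test above verifies.
    func deleteRCOrphaningPods(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
            PropagationPolicy: &orphan,
        })
    }
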
• [SLOW TEST:42.032 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":133,"skipped":2074,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:10:34.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 19 12:10:53.257: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:53.257: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:53.294778 6 log.go:172] (0xc0029d44d0) (0xc001b47a40) Create stream I0719 12:10:53.294809 6 log.go:172] (0xc0029d44d0) (0xc001b47a40) Stream added, broadcasting: 1 I0719 12:10:53.297113 6 log.go:172] (0xc0029d44d0) Reply frame received for 1 I0719 12:10:53.297157 6 log.go:172] (0xc0029d44d0) (0xc00172c1e0) Create stream I0719 12:10:53.297167 6 log.go:172] (0xc0029d44d0) (0xc00172c1e0) Stream added, broadcasting: 3 I0719 12:10:53.297985 6 log.go:172] (0xc0029d44d0) Reply frame received for 3 I0719 12:10:53.298026 6 log.go:172] (0xc0029d44d0) (0xc00172c3c0) Create stream I0719 12:10:53.298036 6 log.go:172] (0xc0029d44d0) (0xc00172c3c0) Stream added, broadcasting: 5 I0719 12:10:53.298743 6 log.go:172] (0xc0029d44d0) Reply frame received for 5 I0719 12:10:53.381468 6 log.go:172] (0xc0029d44d0) Data frame received for 5 I0719 12:10:53.381507 6 log.go:172] (0xc00172c3c0) (5) Data frame handling I0719 12:10:53.381534 6 log.go:172] (0xc0029d44d0) Data frame received for 3 I0719 12:10:53.381563 6 log.go:172] (0xc00172c1e0) (3) Data frame handling I0719 12:10:53.381579 6 log.go:172] (0xc00172c1e0) (3) Data frame sent I0719 12:10:53.381591 6 log.go:172] (0xc0029d44d0) Data frame received for 3 I0719 12:10:53.381618 6 log.go:172] (0xc00172c1e0) (3) Data frame handling I0719 12:10:53.383263 6 log.go:172] (0xc0029d44d0) Data frame received for 1 I0719 12:10:53.383388 6 log.go:172] (0xc001b47a40) (1) Data frame handling I0719 12:10:53.383446 6 log.go:172] (0xc001b47a40) (1) Data frame sent I0719 12:10:53.383479 6 log.go:172] (0xc0029d44d0) (0xc001b47a40) Stream removed, broadcasting: 1 I0719 12:10:53.383541 6 log.go:172] (0xc0029d44d0) Go away received 
I0719 12:10:53.383916 6 log.go:172] (0xc0029d44d0) (0xc001b47a40) Stream removed, broadcasting: 1 I0719 12:10:53.383971 6 log.go:172] (0xc0029d44d0) (0xc00172c1e0) Stream removed, broadcasting: 3 I0719 12:10:53.383999 6 log.go:172] (0xc0029d44d0) (0xc00172c3c0) Stream removed, broadcasting: 5 Jul 19 12:10:53.384: INFO: Exec stderr: "" Jul 19 12:10:53.384: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:53.384: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:53.412482 6 log.go:172] (0xc002d94420) (0xc001fb90e0) Create stream I0719 12:10:53.412508 6 log.go:172] (0xc002d94420) (0xc001fb90e0) Stream added, broadcasting: 1 I0719 12:10:53.414577 6 log.go:172] (0xc002d94420) Reply frame received for 1 I0719 12:10:53.414620 6 log.go:172] (0xc002d94420) (0xc00142e000) Create stream I0719 12:10:53.414639 6 log.go:172] (0xc002d94420) (0xc00142e000) Stream added, broadcasting: 3 I0719 12:10:53.415423 6 log.go:172] (0xc002d94420) Reply frame received for 3 I0719 12:10:53.415449 6 log.go:172] (0xc002d94420) (0xc001b47ae0) Create stream I0719 12:10:53.415458 6 log.go:172] (0xc002d94420) (0xc001b47ae0) Stream added, broadcasting: 5 I0719 12:10:53.416093 6 log.go:172] (0xc002d94420) Reply frame received for 5 I0719 12:10:53.477489 6 log.go:172] (0xc002d94420) Data frame received for 3 I0719 12:10:53.477520 6 log.go:172] (0xc00142e000) (3) Data frame handling I0719 12:10:53.477552 6 log.go:172] (0xc00142e000) (3) Data frame sent I0719 12:10:53.477727 6 log.go:172] (0xc002d94420) Data frame received for 3 I0719 12:10:53.477756 6 log.go:172] (0xc00142e000) (3) Data frame handling I0719 12:10:53.478200 6 log.go:172] (0xc002d94420) Data frame received for 5 I0719 12:10:53.478221 6 log.go:172] (0xc001b47ae0) (5) Data frame handling I0719 12:10:53.480406 6 log.go:172] (0xc002d94420) Data frame received for 1 I0719 12:10:53.480424 6 log.go:172] (0xc001fb90e0) (1) Data frame handling I0719 12:10:53.480448 6 log.go:172] (0xc001fb90e0) (1) Data frame sent I0719 12:10:53.480460 6 log.go:172] (0xc002d94420) (0xc001fb90e0) Stream removed, broadcasting: 1 I0719 12:10:53.480534 6 log.go:172] (0xc002d94420) (0xc001fb90e0) Stream removed, broadcasting: 1 I0719 12:10:53.480571 6 log.go:172] (0xc002d94420) (0xc00142e000) Stream removed, broadcasting: 3 I0719 12:10:53.480600 6 log.go:172] (0xc002d94420) (0xc001b47ae0) Stream removed, broadcasting: 5 Jul 19 12:10:53.480: INFO: Exec stderr: "" Jul 19 12:10:53.480: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:53.480: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:53.480701 6 log.go:172] (0xc002d94420) Go away received I0719 12:10:53.505983 6 log.go:172] (0xc00343a420) (0xc00142e5a0) Create stream I0719 12:10:53.506009 6 log.go:172] (0xc00343a420) (0xc00142e5a0) Stream added, broadcasting: 1 I0719 12:10:53.508066 6 log.go:172] (0xc00343a420) Reply frame received for 1 I0719 12:10:53.508096 6 log.go:172] (0xc00343a420) (0xc001fb9720) Create stream I0719 12:10:53.508107 6 log.go:172] (0xc00343a420) (0xc001fb9720) Stream added, broadcasting: 3 I0719 12:10:53.508991 6 log.go:172] (0xc00343a420) Reply frame received for 3 I0719 12:10:53.509025 6 log.go:172] (0xc00343a420) (0xc00172c460) Create stream I0719 12:10:53.509036 6 log.go:172] 
(0xc00343a420) (0xc00172c460) Stream added, broadcasting: 5 I0719 12:10:53.509831 6 log.go:172] (0xc00343a420) Reply frame received for 5 I0719 12:10:53.576695 6 log.go:172] (0xc00343a420) Data frame received for 5 I0719 12:10:53.576797 6 log.go:172] (0xc00172c460) (5) Data frame handling I0719 12:10:53.576877 6 log.go:172] (0xc00343a420) Data frame received for 3 I0719 12:10:53.576895 6 log.go:172] (0xc001fb9720) (3) Data frame handling I0719 12:10:53.576911 6 log.go:172] (0xc001fb9720) (3) Data frame sent I0719 12:10:53.576920 6 log.go:172] (0xc00343a420) Data frame received for 3 I0719 12:10:53.576930 6 log.go:172] (0xc001fb9720) (3) Data frame handling I0719 12:10:53.578854 6 log.go:172] (0xc00343a420) Data frame received for 1 I0719 12:10:53.578872 6 log.go:172] (0xc00142e5a0) (1) Data frame handling I0719 12:10:53.578880 6 log.go:172] (0xc00142e5a0) (1) Data frame sent I0719 12:10:53.578890 6 log.go:172] (0xc00343a420) (0xc00142e5a0) Stream removed, broadcasting: 1 I0719 12:10:53.578902 6 log.go:172] (0xc00343a420) Go away received I0719 12:10:53.579061 6 log.go:172] (0xc00343a420) (0xc00142e5a0) Stream removed, broadcasting: 1 I0719 12:10:53.579099 6 log.go:172] (0xc00343a420) (0xc001fb9720) Stream removed, broadcasting: 3 I0719 12:10:53.579114 6 log.go:172] (0xc00343a420) (0xc00172c460) Stream removed, broadcasting: 5 Jul 19 12:10:53.579: INFO: Exec stderr: "" Jul 19 12:10:53.579: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:53.579: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:53.685869 6 log.go:172] (0xc000b5c370) (0xc00172c960) Create stream I0719 12:10:53.685900 6 log.go:172] (0xc000b5c370) (0xc00172c960) Stream added, broadcasting: 1 I0719 12:10:53.688127 6 log.go:172] (0xc000b5c370) Reply frame received for 1 I0719 12:10:53.688170 6 log.go:172] (0xc000b5c370) (0xc00142e6e0) Create stream I0719 12:10:53.688180 6 log.go:172] (0xc000b5c370) (0xc00142e6e0) Stream added, broadcasting: 3 I0719 12:10:53.689230 6 log.go:172] (0xc000b5c370) Reply frame received for 3 I0719 12:10:53.689263 6 log.go:172] (0xc000b5c370) (0xc00142e8c0) Create stream I0719 12:10:53.689275 6 log.go:172] (0xc000b5c370) (0xc00142e8c0) Stream added, broadcasting: 5 I0719 12:10:53.690114 6 log.go:172] (0xc000b5c370) Reply frame received for 5 I0719 12:10:53.754424 6 log.go:172] (0xc000b5c370) Data frame received for 5 I0719 12:10:53.754462 6 log.go:172] (0xc00142e8c0) (5) Data frame handling I0719 12:10:53.754510 6 log.go:172] (0xc000b5c370) Data frame received for 3 I0719 12:10:53.754553 6 log.go:172] (0xc00142e6e0) (3) Data frame handling I0719 12:10:53.754576 6 log.go:172] (0xc00142e6e0) (3) Data frame sent I0719 12:10:53.754594 6 log.go:172] (0xc000b5c370) Data frame received for 3 I0719 12:10:53.754602 6 log.go:172] (0xc00142e6e0) (3) Data frame handling I0719 12:10:53.755997 6 log.go:172] (0xc000b5c370) Data frame received for 1 I0719 12:10:53.756019 6 log.go:172] (0xc00172c960) (1) Data frame handling I0719 12:10:53.756047 6 log.go:172] (0xc00172c960) (1) Data frame sent I0719 12:10:53.756072 6 log.go:172] (0xc000b5c370) (0xc00172c960) Stream removed, broadcasting: 1 I0719 12:10:53.756098 6 log.go:172] (0xc000b5c370) Go away received I0719 12:10:53.756180 6 log.go:172] (0xc000b5c370) (0xc00172c960) Stream removed, broadcasting: 1 I0719 12:10:53.756205 6 log.go:172] (0xc000b5c370) (0xc00142e6e0) Stream removed, broadcasting: 3 
I0719 12:10:53.756223 6 log.go:172] (0xc000b5c370) (0xc00142e8c0) Stream removed, broadcasting: 5 Jul 19 12:10:53.756: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 19 12:10:53.756: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:53.756: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:53.782352 6 log.go:172] (0xc002d94a50) (0xc00239a280) Create stream I0719 12:10:53.782380 6 log.go:172] (0xc002d94a50) (0xc00239a280) Stream added, broadcasting: 1 I0719 12:10:53.784634 6 log.go:172] (0xc002d94a50) Reply frame received for 1 I0719 12:10:53.784685 6 log.go:172] (0xc002d94a50) (0xc0018740a0) Create stream I0719 12:10:53.784701 6 log.go:172] (0xc002d94a50) (0xc0018740a0) Stream added, broadcasting: 3 I0719 12:10:53.785948 6 log.go:172] (0xc002d94a50) Reply frame received for 3 I0719 12:10:53.785976 6 log.go:172] (0xc002d94a50) (0xc00172ca00) Create stream I0719 12:10:53.785990 6 log.go:172] (0xc002d94a50) (0xc00172ca00) Stream added, broadcasting: 5 I0719 12:10:53.786901 6 log.go:172] (0xc002d94a50) Reply frame received for 5 I0719 12:10:53.845622 6 log.go:172] (0xc002d94a50) Data frame received for 5 I0719 12:10:53.845717 6 log.go:172] (0xc00172ca00) (5) Data frame handling I0719 12:10:53.845756 6 log.go:172] (0xc002d94a50) Data frame received for 3 I0719 12:10:53.845772 6 log.go:172] (0xc0018740a0) (3) Data frame handling I0719 12:10:53.845795 6 log.go:172] (0xc0018740a0) (3) Data frame sent I0719 12:10:53.845809 6 log.go:172] (0xc002d94a50) Data frame received for 3 I0719 12:10:53.845821 6 log.go:172] (0xc0018740a0) (3) Data frame handling I0719 12:10:53.846780 6 log.go:172] (0xc002d94a50) Data frame received for 1 I0719 12:10:53.846801 6 log.go:172] (0xc00239a280) (1) Data frame handling I0719 12:10:53.846813 6 log.go:172] (0xc00239a280) (1) Data frame sent I0719 12:10:53.846828 6 log.go:172] (0xc002d94a50) (0xc00239a280) Stream removed, broadcasting: 1 I0719 12:10:53.846852 6 log.go:172] (0xc002d94a50) Go away received I0719 12:10:53.846937 6 log.go:172] (0xc002d94a50) (0xc00239a280) Stream removed, broadcasting: 1 I0719 12:10:53.846972 6 log.go:172] (0xc002d94a50) (0xc0018740a0) Stream removed, broadcasting: 3 I0719 12:10:53.846995 6 log.go:172] (0xc002d94a50) (0xc00172ca00) Stream removed, broadcasting: 5 Jul 19 12:10:53.847: INFO: Exec stderr: "" Jul 19 12:10:53.847: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:53.847: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:53.873970 6 log.go:172] (0xc0029d4c60) (0xc001b47f40) Create stream I0719 12:10:53.874012 6 log.go:172] (0xc0029d4c60) (0xc001b47f40) Stream added, broadcasting: 1 I0719 12:10:53.876322 6 log.go:172] (0xc0029d4c60) Reply frame received for 1 I0719 12:10:53.876363 6 log.go:172] (0xc0029d4c60) (0xc00142e960) Create stream I0719 12:10:53.876374 6 log.go:172] (0xc0029d4c60) (0xc00142e960) Stream added, broadcasting: 3 I0719 12:10:53.877187 6 log.go:172] (0xc0029d4c60) Reply frame received for 3 I0719 12:10:53.877230 6 log.go:172] (0xc0029d4c60) (0xc00172cc80) Create stream I0719 12:10:53.877242 6 log.go:172] (0xc0029d4c60) (0xc00172cc80) Stream added, broadcasting: 5 I0719 12:10:53.877971 6 log.go:172] 
(0xc0029d4c60) Reply frame received for 5 I0719 12:10:53.936826 6 log.go:172] (0xc0029d4c60) Data frame received for 3 I0719 12:10:53.936858 6 log.go:172] (0xc00142e960) (3) Data frame handling I0719 12:10:53.936866 6 log.go:172] (0xc00142e960) (3) Data frame sent I0719 12:10:53.936872 6 log.go:172] (0xc0029d4c60) Data frame received for 3 I0719 12:10:53.936876 6 log.go:172] (0xc00142e960) (3) Data frame handling I0719 12:10:53.936894 6 log.go:172] (0xc0029d4c60) Data frame received for 5 I0719 12:10:53.936901 6 log.go:172] (0xc00172cc80) (5) Data frame handling I0719 12:10:53.938476 6 log.go:172] (0xc0029d4c60) Data frame received for 1 I0719 12:10:53.938514 6 log.go:172] (0xc001b47f40) (1) Data frame handling I0719 12:10:53.938538 6 log.go:172] (0xc001b47f40) (1) Data frame sent I0719 12:10:53.938553 6 log.go:172] (0xc0029d4c60) (0xc001b47f40) Stream removed, broadcasting: 1 I0719 12:10:53.938568 6 log.go:172] (0xc0029d4c60) Go away received I0719 12:10:53.938739 6 log.go:172] (0xc0029d4c60) (0xc001b47f40) Stream removed, broadcasting: 1 I0719 12:10:53.938779 6 log.go:172] (0xc0029d4c60) (0xc00142e960) Stream removed, broadcasting: 3 I0719 12:10:53.938801 6 log.go:172] (0xc0029d4c60) (0xc00172cc80) Stream removed, broadcasting: 5 Jul 19 12:10:53.938: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 19 12:10:53.938: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:53.938: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:53.976500 6 log.go:172] (0xc0029d53f0) (0xc00281a140) Create stream I0719 12:10:53.976538 6 log.go:172] (0xc0029d53f0) (0xc00281a140) Stream added, broadcasting: 1 I0719 12:10:53.978566 6 log.go:172] (0xc0029d53f0) Reply frame received for 1 I0719 12:10:53.978605 6 log.go:172] (0xc0029d53f0) (0xc00239a3c0) Create stream I0719 12:10:53.978613 6 log.go:172] (0xc0029d53f0) (0xc00239a3c0) Stream added, broadcasting: 3 I0719 12:10:53.979462 6 log.go:172] (0xc0029d53f0) Reply frame received for 3 I0719 12:10:53.979499 6 log.go:172] (0xc0029d53f0) (0xc00142ebe0) Create stream I0719 12:10:53.979511 6 log.go:172] (0xc0029d53f0) (0xc00142ebe0) Stream added, broadcasting: 5 I0719 12:10:53.980178 6 log.go:172] (0xc0029d53f0) Reply frame received for 5 I0719 12:10:54.027876 6 log.go:172] (0xc0029d53f0) Data frame received for 3 I0719 12:10:54.027902 6 log.go:172] (0xc00239a3c0) (3) Data frame handling I0719 12:10:54.027915 6 log.go:172] (0xc00239a3c0) (3) Data frame sent I0719 12:10:54.027920 6 log.go:172] (0xc0029d53f0) Data frame received for 3 I0719 12:10:54.027924 6 log.go:172] (0xc00239a3c0) (3) Data frame handling I0719 12:10:54.028073 6 log.go:172] (0xc0029d53f0) Data frame received for 5 I0719 12:10:54.028114 6 log.go:172] (0xc00142ebe0) (5) Data frame handling I0719 12:10:54.029411 6 log.go:172] (0xc0029d53f0) Data frame received for 1 I0719 12:10:54.029427 6 log.go:172] (0xc00281a140) (1) Data frame handling I0719 12:10:54.029439 6 log.go:172] (0xc00281a140) (1) Data frame sent I0719 12:10:54.029526 6 log.go:172] (0xc0029d53f0) (0xc00281a140) Stream removed, broadcasting: 1 I0719 12:10:54.029567 6 log.go:172] (0xc0029d53f0) Go away received I0719 12:10:54.029612 6 log.go:172] (0xc0029d53f0) (0xc00281a140) Stream removed, broadcasting: 1 I0719 12:10:54.029635 6 log.go:172] (0xc0029d53f0) (0xc00239a3c0) Stream removed, 
broadcasting: 3 I0719 12:10:54.029650 6 log.go:172] (0xc0029d53f0) (0xc00142ebe0) Stream removed, broadcasting: 5 Jul 19 12:10:54.029: INFO: Exec stderr: "" Jul 19 12:10:54.029: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:54.029: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:54.062546 6 log.go:172] (0xc002d94fd0) (0xc00239a640) Create stream I0719 12:10:54.062572 6 log.go:172] (0xc002d94fd0) (0xc00239a640) Stream added, broadcasting: 1 I0719 12:10:54.065429 6 log.go:172] (0xc002d94fd0) Reply frame received for 1 I0719 12:10:54.065456 6 log.go:172] (0xc002d94fd0) (0xc00281a280) Create stream I0719 12:10:54.065469 6 log.go:172] (0xc002d94fd0) (0xc00281a280) Stream added, broadcasting: 3 I0719 12:10:54.066344 6 log.go:172] (0xc002d94fd0) Reply frame received for 3 I0719 12:10:54.066400 6 log.go:172] (0xc002d94fd0) (0xc00172d220) Create stream I0719 12:10:54.066419 6 log.go:172] (0xc002d94fd0) (0xc00172d220) Stream added, broadcasting: 5 I0719 12:10:54.067357 6 log.go:172] (0xc002d94fd0) Reply frame received for 5 I0719 12:10:54.129197 6 log.go:172] (0xc002d94fd0) Data frame received for 5 I0719 12:10:54.129256 6 log.go:172] (0xc00172d220) (5) Data frame handling I0719 12:10:54.129296 6 log.go:172] (0xc002d94fd0) Data frame received for 3 I0719 12:10:54.129316 6 log.go:172] (0xc00281a280) (3) Data frame handling I0719 12:10:54.129342 6 log.go:172] (0xc00281a280) (3) Data frame sent I0719 12:10:54.129362 6 log.go:172] (0xc002d94fd0) Data frame received for 3 I0719 12:10:54.129380 6 log.go:172] (0xc00281a280) (3) Data frame handling I0719 12:10:54.130412 6 log.go:172] (0xc002d94fd0) Data frame received for 1 I0719 12:10:54.130426 6 log.go:172] (0xc00239a640) (1) Data frame handling I0719 12:10:54.130442 6 log.go:172] (0xc00239a640) (1) Data frame sent I0719 12:10:54.130467 6 log.go:172] (0xc002d94fd0) (0xc00239a640) Stream removed, broadcasting: 1 I0719 12:10:54.130536 6 log.go:172] (0xc002d94fd0) Go away received I0719 12:10:54.130591 6 log.go:172] (0xc002d94fd0) (0xc00239a640) Stream removed, broadcasting: 1 I0719 12:10:54.130610 6 log.go:172] (0xc002d94fd0) (0xc00281a280) Stream removed, broadcasting: 3 I0719 12:10:54.130619 6 log.go:172] (0xc002d94fd0) (0xc00172d220) Stream removed, broadcasting: 5 Jul 19 12:10:54.130: INFO: Exec stderr: "" Jul 19 12:10:54.130: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:54.130: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:54.161255 6 log.go:172] (0xc002d956b0) (0xc00239a780) Create stream I0719 12:10:54.161281 6 log.go:172] (0xc002d956b0) (0xc00239a780) Stream added, broadcasting: 1 I0719 12:10:54.165588 6 log.go:172] (0xc002d956b0) Reply frame received for 1 I0719 12:10:54.165642 6 log.go:172] (0xc002d956b0) (0xc00239a960) Create stream I0719 12:10:54.165659 6 log.go:172] (0xc002d956b0) (0xc00239a960) Stream added, broadcasting: 3 I0719 12:10:54.166386 6 log.go:172] (0xc002d956b0) Reply frame received for 3 I0719 12:10:54.166409 6 log.go:172] (0xc002d956b0) (0xc00281a320) Create stream I0719 12:10:54.166421 6 log.go:172] (0xc002d956b0) (0xc00281a320) Stream added, broadcasting: 5 I0719 12:10:54.167130 6 log.go:172] (0xc002d956b0) Reply frame received for 5 I0719 12:10:54.211497 6 
log.go:172] (0xc002d956b0) Data frame received for 5 I0719 12:10:54.211528 6 log.go:172] (0xc00281a320) (5) Data frame handling I0719 12:10:54.211550 6 log.go:172] (0xc002d956b0) Data frame received for 3 I0719 12:10:54.211561 6 log.go:172] (0xc00239a960) (3) Data frame handling I0719 12:10:54.211569 6 log.go:172] (0xc00239a960) (3) Data frame sent I0719 12:10:54.211578 6 log.go:172] (0xc002d956b0) Data frame received for 3 I0719 12:10:54.211587 6 log.go:172] (0xc00239a960) (3) Data frame handling I0719 12:10:54.213222 6 log.go:172] (0xc002d956b0) Data frame received for 1 I0719 12:10:54.213244 6 log.go:172] (0xc00239a780) (1) Data frame handling I0719 12:10:54.213262 6 log.go:172] (0xc00239a780) (1) Data frame sent I0719 12:10:54.213282 6 log.go:172] (0xc002d956b0) (0xc00239a780) Stream removed, broadcasting: 1 I0719 12:10:54.213356 6 log.go:172] (0xc002d956b0) Go away received I0719 12:10:54.213396 6 log.go:172] (0xc002d956b0) (0xc00239a780) Stream removed, broadcasting: 1 I0719 12:10:54.213413 6 log.go:172] (0xc002d956b0) (0xc00239a960) Stream removed, broadcasting: 3 I0719 12:10:54.213431 6 log.go:172] (0xc002d956b0) (0xc00281a320) Stream removed, broadcasting: 5 Jul 19 12:10:54.213: INFO: Exec stderr: "" Jul 19 12:10:54.213: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8932 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:10:54.213: INFO: >>> kubeConfig: /root/.kube/config I0719 12:10:54.281894 6 log.go:172] (0xc0029d5970) (0xc00281a6e0) Create stream I0719 12:10:54.281931 6 log.go:172] (0xc0029d5970) (0xc00281a6e0) Stream added, broadcasting: 1 I0719 12:10:54.284610 6 log.go:172] (0xc0029d5970) Reply frame received for 1 I0719 12:10:54.284631 6 log.go:172] (0xc0029d5970) (0xc00281a820) Create stream I0719 12:10:54.284642 6 log.go:172] (0xc0029d5970) (0xc00281a820) Stream added, broadcasting: 3 I0719 12:10:54.285795 6 log.go:172] (0xc0029d5970) Reply frame received for 3 I0719 12:10:54.285855 6 log.go:172] (0xc0029d5970) (0xc00172d4a0) Create stream I0719 12:10:54.285885 6 log.go:172] (0xc0029d5970) (0xc00172d4a0) Stream added, broadcasting: 5 I0719 12:10:54.286948 6 log.go:172] (0xc0029d5970) Reply frame received for 5 I0719 12:10:54.345003 6 log.go:172] (0xc0029d5970) Data frame received for 5 I0719 12:10:54.345036 6 log.go:172] (0xc00172d4a0) (5) Data frame handling I0719 12:10:54.345076 6 log.go:172] (0xc0029d5970) Data frame received for 3 I0719 12:10:54.345102 6 log.go:172] (0xc00281a820) (3) Data frame handling I0719 12:10:54.345125 6 log.go:172] (0xc00281a820) (3) Data frame sent I0719 12:10:54.345146 6 log.go:172] (0xc0029d5970) Data frame received for 3 I0719 12:10:54.345156 6 log.go:172] (0xc00281a820) (3) Data frame handling I0719 12:10:54.346390 6 log.go:172] (0xc0029d5970) Data frame received for 1 I0719 12:10:54.346419 6 log.go:172] (0xc00281a6e0) (1) Data frame handling I0719 12:10:54.346445 6 log.go:172] (0xc00281a6e0) (1) Data frame sent I0719 12:10:54.346464 6 log.go:172] (0xc0029d5970) (0xc00281a6e0) Stream removed, broadcasting: 1 I0719 12:10:54.346482 6 log.go:172] (0xc0029d5970) Go away received I0719 12:10:54.346614 6 log.go:172] (0xc0029d5970) (0xc00281a6e0) Stream removed, broadcasting: 1 I0719 12:10:54.346640 6 log.go:172] (0xc0029d5970) (0xc00281a820) Stream removed, broadcasting: 3 I0719 12:10:54.346648 6 log.go:172] (0xc0029d5970) (0xc00172d4a0) Stream removed, broadcasting: 5 Jul 19 12:10:54.346: INFO: Exec stderr: "" 
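(The Create stream / Reply frame / Data frame lines above are the SPDY session behind each ExecWithOptions call: the command's stdout, stderr, and an error channel are multiplexed as separate streams over a single upgraded connection through the API server, and the /etc/hosts checks reduce to inspecting the command output for the kubelet's "# Kubernetes-managed hosts file." banner. Below is a minimal client-go sketch of one such exec — a rough equivalent for orientation, not the framework's own code; the kubeconfig path, namespace, pod, and container names are taken from the log, and v1.17-era client-go signatures are assumed.)

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// POST .../pods/test-pod/exec with the command and capture options,
	// mirroring the ExecWithOptions entries logged above.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-8932").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-3",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}

	// stdout, stderr, and the error channel each become one multiplexed
	// SPDY stream -- the "Stream added, broadcasting: 1/3/5" lines above.
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}

(Per the STEPs above, the container with an explicit /etc/hosts mount and the hostNetwork=true pod are expected to lack the kubelet-managed banner, which is what the "not kubelet-managed" verifications assert.)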
[AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:10:54.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8932" for this suite. • [SLOW TEST:20.284 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2084,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:10:54.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6607 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-6607 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6607 Jul 19 12:10:54.896: INFO: Found 0 stateful pods, waiting for 1 Jul 19 12:11:04.900: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 19 12:11:04.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6607 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 19 12:11:05.268: INFO: stderr: "I0719 12:11:05.071435 2083 log.go:172] (0xc0005706e0) (0xc000996000) Create stream\nI0719 12:11:05.071503 2083 log.go:172] (0xc0005706e0) (0xc000996000) Stream added, broadcasting: 1\nI0719 12:11:05.074362 2083 log.go:172] (0xc0005706e0) Reply frame received for 1\nI0719 12:11:05.074416 2083 log.go:172] (0xc0005706e0) (0xc000910000) Create stream\nI0719 12:11:05.074433 2083 log.go:172] (0xc0005706e0) (0xc000910000) Stream added, broadcasting: 3\nI0719 12:11:05.075470 2083 log.go:172] (0xc0005706e0) Reply frame received for 3\nI0719 12:11:05.075515 2083 log.go:172] (0xc0005706e0) (0xc0005e7b80) Create stream\nI0719 12:11:05.075526 2083 log.go:172] (0xc0005706e0) (0xc0005e7b80) Stream added, broadcasting: 5\nI0719 12:11:05.076211 2083 log.go:172] (0xc0005706e0) Reply frame received for 5\nI0719 12:11:05.145127 
2083 log.go:172] (0xc0005706e0) Data frame received for 5\nI0719 12:11:05.145156 2083 log.go:172] (0xc0005e7b80) (5) Data frame handling\nI0719 12:11:05.145166 2083 log.go:172] (0xc0005e7b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:11:05.262069 2083 log.go:172] (0xc0005706e0) Data frame received for 5\nI0719 12:11:05.262220 2083 log.go:172] (0xc0005e7b80) (5) Data frame handling\nI0719 12:11:05.262283 2083 log.go:172] (0xc0005706e0) Data frame received for 3\nI0719 12:11:05.262330 2083 log.go:172] (0xc000910000) (3) Data frame handling\nI0719 12:11:05.262356 2083 log.go:172] (0xc000910000) (3) Data frame sent\nI0719 12:11:05.262373 2083 log.go:172] (0xc0005706e0) Data frame received for 3\nI0719 12:11:05.262387 2083 log.go:172] (0xc000910000) (3) Data frame handling\nI0719 12:11:05.264373 2083 log.go:172] (0xc0005706e0) Data frame received for 1\nI0719 12:11:05.264397 2083 log.go:172] (0xc000996000) (1) Data frame handling\nI0719 12:11:05.264407 2083 log.go:172] (0xc000996000) (1) Data frame sent\nI0719 12:11:05.264432 2083 log.go:172] (0xc0005706e0) (0xc000996000) Stream removed, broadcasting: 1\nI0719 12:11:05.264466 2083 log.go:172] (0xc0005706e0) Go away received\nI0719 12:11:05.264905 2083 log.go:172] (0xc0005706e0) (0xc000996000) Stream removed, broadcasting: 1\nI0719 12:11:05.264944 2083 log.go:172] (0xc0005706e0) (0xc000910000) Stream removed, broadcasting: 3\nI0719 12:11:05.264962 2083 log.go:172] (0xc0005706e0) (0xc0005e7b80) Stream removed, broadcasting: 5\n" Jul 19 12:11:05.268: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 19 12:11:05.268: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 19 12:11:05.425: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 19 12:11:15.429: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 19 12:11:15.429: INFO: Waiting for statefulset status.replicas updated to 0 Jul 19 12:11:15.505: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:15.505: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:15.505: INFO: Jul 19 12:11:15.505: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 19 12:11:16.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996069251s Jul 19 12:11:17.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99141821s Jul 19 12:11:18.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.644929295s Jul 19 12:11:20.037: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.520293708s Jul 19 12:11:21.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.464348187s Jul 19 12:11:22.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.441503091s Jul 19 12:11:23.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.437599538s Jul 19 12:11:24.090: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.433311404s Jul 19 
12:11:25.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 411.024364ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6607 Jul 19 12:11:26.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6607 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 19 12:11:26.305: INFO: stderr: "I0719 12:11:26.235497 2102 log.go:172] (0xc0009d4160) (0xc000207540) Create stream\nI0719 12:11:26.235558 2102 log.go:172] (0xc0009d4160) (0xc000207540) Stream added, broadcasting: 1\nI0719 12:11:26.237598 2102 log.go:172] (0xc0009d4160) Reply frame received for 1\nI0719 12:11:26.237643 2102 log.go:172] (0xc0009d4160) (0xc00092a000) Create stream\nI0719 12:11:26.237661 2102 log.go:172] (0xc0009d4160) (0xc00092a000) Stream added, broadcasting: 3\nI0719 12:11:26.238468 2102 log.go:172] (0xc0009d4160) Reply frame received for 3\nI0719 12:11:26.238501 2102 log.go:172] (0xc0009d4160) (0xc00092a0a0) Create stream\nI0719 12:11:26.238512 2102 log.go:172] (0xc0009d4160) (0xc00092a0a0) Stream added, broadcasting: 5\nI0719 12:11:26.239144 2102 log.go:172] (0xc0009d4160) Reply frame received for 5\nI0719 12:11:26.298543 2102 log.go:172] (0xc0009d4160) Data frame received for 3\nI0719 12:11:26.298569 2102 log.go:172] (0xc00092a000) (3) Data frame handling\nI0719 12:11:26.298612 2102 log.go:172] (0xc0009d4160) Data frame received for 5\nI0719 12:11:26.298670 2102 log.go:172] (0xc00092a0a0) (5) Data frame handling\nI0719 12:11:26.298704 2102 log.go:172] (0xc00092a0a0) (5) Data frame sent\nI0719 12:11:26.298723 2102 log.go:172] (0xc00092a000) (3) Data frame sent\nI0719 12:11:26.298750 2102 log.go:172] (0xc0009d4160) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0719 12:11:26.298769 2102 log.go:172] (0xc0009d4160) Data frame received for 5\nI0719 12:11:26.298804 2102 log.go:172] (0xc00092a0a0) (5) Data frame handling\nI0719 12:11:26.298842 2102 log.go:172] (0xc00092a000) (3) Data frame handling\nI0719 12:11:26.300318 2102 log.go:172] (0xc0009d4160) Data frame received for 1\nI0719 12:11:26.300341 2102 log.go:172] (0xc000207540) (1) Data frame handling\nI0719 12:11:26.300352 2102 log.go:172] (0xc000207540) (1) Data frame sent\nI0719 12:11:26.300373 2102 log.go:172] (0xc0009d4160) (0xc000207540) Stream removed, broadcasting: 1\nI0719 12:11:26.300396 2102 log.go:172] (0xc0009d4160) Go away received\nI0719 12:11:26.301100 2102 log.go:172] (0xc0009d4160) (0xc000207540) Stream removed, broadcasting: 1\nI0719 12:11:26.301126 2102 log.go:172] (0xc0009d4160) (0xc00092a000) Stream removed, broadcasting: 3\nI0719 12:11:26.301138 2102 log.go:172] (0xc0009d4160) (0xc00092a0a0) Stream removed, broadcasting: 5\n" Jul 19 12:11:26.305: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 19 12:11:26.305: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 19 12:11:26.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6607 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 19 12:11:26.495: INFO: stderr: "I0719 12:11:26.430099 2125 log.go:172] (0xc0000f5290) (0xc00072d900) Create stream\nI0719 12:11:26.430171 2125 log.go:172] (0xc0000f5290) (0xc00072d900) Stream added, broadcasting: 1\nI0719 12:11:26.434440 2125 log.go:172] (0xc0000f5290) 
Reply frame received for 1\nI0719 12:11:26.434572 2125 log.go:172] (0xc0000f5290) (0xc0008c0000) Create stream\nI0719 12:11:26.434601 2125 log.go:172] (0xc0000f5290) (0xc0008c0000) Stream added, broadcasting: 3\nI0719 12:11:26.435674 2125 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0719 12:11:26.435720 2125 log.go:172] (0xc0000f5290) (0xc000300000) Create stream\nI0719 12:11:26.435742 2125 log.go:172] (0xc0000f5290) (0xc000300000) Stream added, broadcasting: 5\nI0719 12:11:26.436594 2125 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0719 12:11:26.489050 2125 log.go:172] (0xc0000f5290) Data frame received for 5\nI0719 12:11:26.489101 2125 log.go:172] (0xc000300000) (5) Data frame handling\nI0719 12:11:26.489119 2125 log.go:172] (0xc000300000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0719 12:11:26.489135 2125 log.go:172] (0xc0000f5290) Data frame received for 5\nI0719 12:11:26.489173 2125 log.go:172] (0xc000300000) (5) Data frame handling\nI0719 12:11:26.489200 2125 log.go:172] (0xc0000f5290) Data frame received for 3\nI0719 12:11:26.489217 2125 log.go:172] (0xc0008c0000) (3) Data frame handling\nI0719 12:11:26.489237 2125 log.go:172] (0xc0008c0000) (3) Data frame sent\nI0719 12:11:26.489252 2125 log.go:172] (0xc0000f5290) Data frame received for 3\nI0719 12:11:26.489266 2125 log.go:172] (0xc0008c0000) (3) Data frame handling\nI0719 12:11:26.491172 2125 log.go:172] (0xc0000f5290) Data frame received for 1\nI0719 12:11:26.491206 2125 log.go:172] (0xc00072d900) (1) Data frame handling\nI0719 12:11:26.491218 2125 log.go:172] (0xc00072d900) (1) Data frame sent\nI0719 12:11:26.491231 2125 log.go:172] (0xc0000f5290) (0xc00072d900) Stream removed, broadcasting: 1\nI0719 12:11:26.491545 2125 log.go:172] (0xc0000f5290) (0xc00072d900) Stream removed, broadcasting: 1\nI0719 12:11:26.491562 2125 log.go:172] (0xc0000f5290) (0xc0008c0000) Stream removed, broadcasting: 3\nI0719 12:11:26.491809 2125 log.go:172] (0xc0000f5290) (0xc000300000) Stream removed, broadcasting: 5\n" Jul 19 12:11:26.496: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 19 12:11:26.496: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 19 12:11:26.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6607 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 19 12:11:26.689: INFO: stderr: "I0719 12:11:26.618674 2146 log.go:172] (0xc000104580) (0xc0006a9ae0) Create stream\nI0719 12:11:26.618738 2146 log.go:172] (0xc000104580) (0xc0006a9ae0) Stream added, broadcasting: 1\nI0719 12:11:26.621388 2146 log.go:172] (0xc000104580) Reply frame received for 1\nI0719 12:11:26.621435 2146 log.go:172] (0xc000104580) (0xc0006a9cc0) Create stream\nI0719 12:11:26.621454 2146 log.go:172] (0xc000104580) (0xc0006a9cc0) Stream added, broadcasting: 3\nI0719 12:11:26.622493 2146 log.go:172] (0xc000104580) Reply frame received for 3\nI0719 12:11:26.622531 2146 log.go:172] (0xc000104580) (0xc0006a9d60) Create stream\nI0719 12:11:26.622545 2146 log.go:172] (0xc000104580) (0xc0006a9d60) Stream added, broadcasting: 5\nI0719 12:11:26.623550 2146 log.go:172] (0xc000104580) Reply frame received for 5\nI0719 12:11:26.682666 2146 log.go:172] (0xc000104580) Data frame received for 5\nI0719 12:11:26.682709 2146 log.go:172] (0xc0006a9d60) 
(5) Data frame handling\nI0719 12:11:26.682731 2146 log.go:172] (0xc0006a9d60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0719 12:11:26.682751 2146 log.go:172] (0xc000104580) Data frame received for 5\nI0719 12:11:26.682770 2146 log.go:172] (0xc0006a9d60) (5) Data frame handling\nI0719 12:11:26.682795 2146 log.go:172] (0xc000104580) Data frame received for 3\nI0719 12:11:26.682808 2146 log.go:172] (0xc0006a9cc0) (3) Data frame handling\nI0719 12:11:26.682820 2146 log.go:172] (0xc0006a9cc0) (3) Data frame sent\nI0719 12:11:26.682834 2146 log.go:172] (0xc000104580) Data frame received for 3\nI0719 12:11:26.682851 2146 log.go:172] (0xc0006a9cc0) (3) Data frame handling\nI0719 12:11:26.684382 2146 log.go:172] (0xc000104580) Data frame received for 1\nI0719 12:11:26.684410 2146 log.go:172] (0xc0006a9ae0) (1) Data frame handling\nI0719 12:11:26.684437 2146 log.go:172] (0xc0006a9ae0) (1) Data frame sent\nI0719 12:11:26.684621 2146 log.go:172] (0xc000104580) (0xc0006a9ae0) Stream removed, broadcasting: 1\nI0719 12:11:26.684927 2146 log.go:172] (0xc000104580) Go away received\nI0719 12:11:26.685172 2146 log.go:172] (0xc000104580) (0xc0006a9ae0) Stream removed, broadcasting: 1\nI0719 12:11:26.685196 2146 log.go:172] (0xc000104580) (0xc0006a9cc0) Stream removed, broadcasting: 3\nI0719 12:11:26.685208 2146 log.go:172] (0xc000104580) (0xc0006a9d60) Stream removed, broadcasting: 5\n" Jul 19 12:11:26.689: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 19 12:11:26.689: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 19 12:11:26.693: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 12:11:26.693: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 12:11:26.693: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 19 12:11:26.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6607 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 19 12:11:26.996: INFO: stderr: "I0719 12:11:26.907301 2168 log.go:172] (0xc000b2cc60) (0xc00080de00) Create stream\nI0719 12:11:26.907359 2168 log.go:172] (0xc000b2cc60) (0xc00080de00) Stream added, broadcasting: 1\nI0719 12:11:26.914989 2168 log.go:172] (0xc000b2cc60) Reply frame received for 1\nI0719 12:11:26.915037 2168 log.go:172] (0xc000b2cc60) (0xc000b1a0a0) Create stream\nI0719 12:11:26.915047 2168 log.go:172] (0xc000b2cc60) (0xc000b1a0a0) Stream added, broadcasting: 3\nI0719 12:11:26.916225 2168 log.go:172] (0xc000b2cc60) Reply frame received for 3\nI0719 12:11:26.916273 2168 log.go:172] (0xc000b2cc60) (0xc000a38320) Create stream\nI0719 12:11:26.916292 2168 log.go:172] (0xc000b2cc60) (0xc000a38320) Stream added, broadcasting: 5\nI0719 12:11:26.917284 2168 log.go:172] (0xc000b2cc60) Reply frame received for 5\nI0719 12:11:26.990133 2168 log.go:172] (0xc000b2cc60) Data frame received for 5\nI0719 12:11:26.990178 2168 log.go:172] (0xc000a38320) (5) Data frame handling\nI0719 12:11:26.990202 2168 log.go:172] (0xc000a38320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:11:26.990222 2168 log.go:172] (0xc000b2cc60) Data frame received for 
5\nI0719 12:11:26.990239 2168 log.go:172] (0xc000a38320) (5) Data frame handling\nI0719 12:11:26.990287 2168 log.go:172] (0xc000b2cc60) Data frame received for 3\nI0719 12:11:26.990323 2168 log.go:172] (0xc000b1a0a0) (3) Data frame handling\nI0719 12:11:26.990343 2168 log.go:172] (0xc000b1a0a0) (3) Data frame sent\nI0719 12:11:26.990358 2168 log.go:172] (0xc000b2cc60) Data frame received for 3\nI0719 12:11:26.990367 2168 log.go:172] (0xc000b1a0a0) (3) Data frame handling\nI0719 12:11:26.991506 2168 log.go:172] (0xc000b2cc60) Data frame received for 1\nI0719 12:11:26.991522 2168 log.go:172] (0xc00080de00) (1) Data frame handling\nI0719 12:11:26.991532 2168 log.go:172] (0xc00080de00) (1) Data frame sent\nI0719 12:11:26.991631 2168 log.go:172] (0xc000b2cc60) (0xc00080de00) Stream removed, broadcasting: 1\nI0719 12:11:26.991671 2168 log.go:172] (0xc000b2cc60) Go away received\nI0719 12:11:26.992008 2168 log.go:172] (0xc000b2cc60) (0xc00080de00) Stream removed, broadcasting: 1\nI0719 12:11:26.992029 2168 log.go:172] (0xc000b2cc60) (0xc000b1a0a0) Stream removed, broadcasting: 3\nI0719 12:11:26.992041 2168 log.go:172] (0xc000b2cc60) (0xc000a38320) Stream removed, broadcasting: 5\n" Jul 19 12:11:26.996: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 19 12:11:26.996: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 19 12:11:26.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6607 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 19 12:11:27.789: INFO: stderr: "I0719 12:11:27.120427 2187 log.go:172] (0xc0005d3130) (0xc0005fb9a0) Create stream\nI0719 12:11:27.120512 2187 log.go:172] (0xc0005d3130) (0xc0005fb9a0) Stream added, broadcasting: 1\nI0719 12:11:27.123281 2187 log.go:172] (0xc0005d3130) Reply frame received for 1\nI0719 12:11:27.123328 2187 log.go:172] (0xc0005d3130) (0xc00057a000) Create stream\nI0719 12:11:27.123341 2187 log.go:172] (0xc0005d3130) (0xc00057a000) Stream added, broadcasting: 3\nI0719 12:11:27.124361 2187 log.go:172] (0xc0005d3130) Reply frame received for 3\nI0719 12:11:27.124398 2187 log.go:172] (0xc0005d3130) (0xc00077c000) Create stream\nI0719 12:11:27.124410 2187 log.go:172] (0xc0005d3130) (0xc00077c000) Stream added, broadcasting: 5\nI0719 12:11:27.125331 2187 log.go:172] (0xc0005d3130) Reply frame received for 5\nI0719 12:11:27.188460 2187 log.go:172] (0xc0005d3130) Data frame received for 5\nI0719 12:11:27.188506 2187 log.go:172] (0xc00077c000) (5) Data frame handling\nI0719 12:11:27.188534 2187 log.go:172] (0xc00077c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:11:27.782673 2187 log.go:172] (0xc0005d3130) Data frame received for 5\nI0719 12:11:27.782735 2187 log.go:172] (0xc00077c000) (5) Data frame handling\nI0719 12:11:27.782762 2187 log.go:172] (0xc0005d3130) Data frame received for 3\nI0719 12:11:27.782774 2187 log.go:172] (0xc00057a000) (3) Data frame handling\nI0719 12:11:27.782793 2187 log.go:172] (0xc00057a000) (3) Data frame sent\nI0719 12:11:27.783079 2187 log.go:172] (0xc0005d3130) Data frame received for 3\nI0719 12:11:27.783111 2187 log.go:172] (0xc00057a000) (3) Data frame handling\nI0719 12:11:27.784031 2187 log.go:172] (0xc0005d3130) Data frame received for 1\nI0719 12:11:27.784070 2187 log.go:172] (0xc0005fb9a0) (1) Data frame handling\nI0719 12:11:27.784093 2187 log.go:172] (0xc0005fb9a0) (1) 
Data frame sent\nI0719 12:11:27.784115 2187 log.go:172] (0xc0005d3130) (0xc0005fb9a0) Stream removed, broadcasting: 1\nI0719 12:11:27.784137 2187 log.go:172] (0xc0005d3130) Go away received\nI0719 12:11:27.784589 2187 log.go:172] (0xc0005d3130) (0xc0005fb9a0) Stream removed, broadcasting: 1\nI0719 12:11:27.784625 2187 log.go:172] (0xc0005d3130) (0xc00057a000) Stream removed, broadcasting: 3\nI0719 12:11:27.784640 2187 log.go:172] (0xc0005d3130) (0xc00077c000) Stream removed, broadcasting: 5\n" Jul 19 12:11:27.789: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 19 12:11:27.789: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 19 12:11:27.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6607 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 19 12:11:28.421: INFO: stderr: "I0719 12:11:28.236686 2209 log.go:172] (0xc0006e6a50) (0xc0007400a0) Create stream\nI0719 12:11:28.236845 2209 log.go:172] (0xc0006e6a50) (0xc0007400a0) Stream added, broadcasting: 1\nI0719 12:11:28.239393 2209 log.go:172] (0xc0006e6a50) Reply frame received for 1\nI0719 12:11:28.239434 2209 log.go:172] (0xc0006e6a50) (0xc000740140) Create stream\nI0719 12:11:28.239446 2209 log.go:172] (0xc0006e6a50) (0xc000740140) Stream added, broadcasting: 3\nI0719 12:11:28.240390 2209 log.go:172] (0xc0006e6a50) Reply frame received for 3\nI0719 12:11:28.240421 2209 log.go:172] (0xc0006e6a50) (0xc0007401e0) Create stream\nI0719 12:11:28.240431 2209 log.go:172] (0xc0006e6a50) (0xc0007401e0) Stream added, broadcasting: 5\nI0719 12:11:28.241542 2209 log.go:172] (0xc0006e6a50) Reply frame received for 5\nI0719 12:11:28.309661 2209 log.go:172] (0xc0006e6a50) Data frame received for 5\nI0719 12:11:28.309689 2209 log.go:172] (0xc0007401e0) (5) Data frame handling\nI0719 12:11:28.309710 2209 log.go:172] (0xc0007401e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:11:28.413724 2209 log.go:172] (0xc0006e6a50) Data frame received for 3\nI0719 12:11:28.413754 2209 log.go:172] (0xc000740140) (3) Data frame handling\nI0719 12:11:28.413778 2209 log.go:172] (0xc000740140) (3) Data frame sent\nI0719 12:11:28.414012 2209 log.go:172] (0xc0006e6a50) Data frame received for 5\nI0719 12:11:28.414046 2209 log.go:172] (0xc0007401e0) (5) Data frame handling\nI0719 12:11:28.414086 2209 log.go:172] (0xc0006e6a50) Data frame received for 3\nI0719 12:11:28.414121 2209 log.go:172] (0xc000740140) (3) Data frame handling\nI0719 12:11:28.416176 2209 log.go:172] (0xc0006e6a50) Data frame received for 1\nI0719 12:11:28.416205 2209 log.go:172] (0xc0007400a0) (1) Data frame handling\nI0719 12:11:28.416232 2209 log.go:172] (0xc0007400a0) (1) Data frame sent\nI0719 12:11:28.416279 2209 log.go:172] (0xc0006e6a50) (0xc0007400a0) Stream removed, broadcasting: 1\nI0719 12:11:28.416321 2209 log.go:172] (0xc0006e6a50) Go away received\nI0719 12:11:28.416915 2209 log.go:172] (0xc0006e6a50) (0xc0007400a0) Stream removed, broadcasting: 1\nI0719 12:11:28.416964 2209 log.go:172] (0xc0006e6a50) (0xc000740140) Stream removed, broadcasting: 3\nI0719 12:11:28.416986 2209 log.go:172] (0xc0006e6a50) (0xc0007401e0) Stream removed, broadcasting: 5\n" Jul 19 12:11:28.421: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 19 12:11:28.421: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on 
ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 19 12:11:28.421: INFO: Waiting for statefulset status.replicas updated to 0 Jul 19 12:11:28.425: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 19 12:11:39.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 19 12:11:39.757: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 19 12:11:39.757: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 19 12:11:40.344: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:40.344: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:40.344: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:40.344: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:40.344: INFO: Jul 19 12:11:40.344: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:41.700: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:41.700: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:41.700: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:41.700: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:41.700: INFO: Jul 19 12:11:41.700: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:42.790: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:42.790: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:42.790: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:42.790: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:42.790: INFO: Jul 19 12:11:42.790: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:43.793: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:43.794: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:43.794: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:43.794: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:43.794: INFO: Jul 19 12:11:43.794: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:44.798: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:44.798: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:44.798: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:44.798: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:44.798: INFO: Jul 19 12:11:44.798: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:45.802: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:45.802: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:45.802: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:45.802: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:45.802: INFO: Jul 19 12:11:45.802: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:46.807: INFO: POD NODE PHASE GRACE CONDITIONS 
Jul 19 12:11:46.807: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:46.807: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:46.807: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:46.807: INFO: Jul 19 12:11:46.807: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:47.868: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:47.868: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:47.868: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:47.868: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:15 +0000 UTC }] Jul 19 12:11:47.868: INFO: Jul 19 12:11:47.868: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 12:11:48.880: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 12:11:48.880: INFO: ss-0 jerma-worker2 Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:11:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 12:10:55 +0000 UTC }] Jul 19 12:11:48.880: INFO: Jul 19 12:11:48.880: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 19 12:11:49.898: INFO: Verifying statefulset ss doesn't scale past 0 for another 192.478562ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6607 Jul 19 12:11:50.902: INFO: Scaling statefulset ss to 0 Jul 19 12:11:50.912: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 19 12:11:50.915: INFO: Deleting all statefulset in ns statefulset-6607 Jul 19 12:11:50.917: INFO: Scaling statefulset ss to 0 Jul 19 12:11:50.930: INFO: Waiting for statefulset status.replicas updated to 0 Jul 19 12:11:50.932: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:11:51.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6607" for this suite. • [SLOW TEST:56.770 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":135,"skipped":2090,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:11:51.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7458 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 19 12:11:51.259: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 19 12:12:18.829: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.24 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7458 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 
19 12:12:18.829: INFO: >>> kubeConfig: /root/.kube/config I0719 12:12:18.865059 6 log.go:172] (0xc0025d7970) (0xc001f91d60) Create stream I0719 12:12:18.865091 6 log.go:172] (0xc0025d7970) (0xc001f91d60) Stream added, broadcasting: 1 I0719 12:12:18.867136 6 log.go:172] (0xc0025d7970) Reply frame received for 1 I0719 12:12:18.867164 6 log.go:172] (0xc0025d7970) (0xc00281aa00) Create stream I0719 12:12:18.867173 6 log.go:172] (0xc0025d7970) (0xc00281aa00) Stream added, broadcasting: 3 I0719 12:12:18.867903 6 log.go:172] (0xc0025d7970) Reply frame received for 3 I0719 12:12:18.867928 6 log.go:172] (0xc0025d7970) (0xc00142ec80) Create stream I0719 12:12:18.867936 6 log.go:172] (0xc0025d7970) (0xc00142ec80) Stream added, broadcasting: 5 I0719 12:12:18.868672 6 log.go:172] (0xc0025d7970) Reply frame received for 5 I0719 12:12:19.917965 6 log.go:172] (0xc0025d7970) Data frame received for 3 I0719 12:12:19.918030 6 log.go:172] (0xc00281aa00) (3) Data frame handling I0719 12:12:19.918060 6 log.go:172] (0xc00281aa00) (3) Data frame sent I0719 12:12:19.918110 6 log.go:172] (0xc0025d7970) Data frame received for 5 I0719 12:12:19.918143 6 log.go:172] (0xc00142ec80) (5) Data frame handling I0719 12:12:19.918258 6 log.go:172] (0xc0025d7970) Data frame received for 3 I0719 12:12:19.918271 6 log.go:172] (0xc00281aa00) (3) Data frame handling I0719 12:12:19.920379 6 log.go:172] (0xc0025d7970) Data frame received for 1 I0719 12:12:19.920463 6 log.go:172] (0xc001f91d60) (1) Data frame handling I0719 12:12:19.920534 6 log.go:172] (0xc001f91d60) (1) Data frame sent I0719 12:12:19.920605 6 log.go:172] (0xc0025d7970) (0xc001f91d60) Stream removed, broadcasting: 1 I0719 12:12:19.920684 6 log.go:172] (0xc0025d7970) Go away received I0719 12:12:19.920795 6 log.go:172] (0xc0025d7970) (0xc001f91d60) Stream removed, broadcasting: 1 I0719 12:12:19.920813 6 log.go:172] (0xc0025d7970) (0xc00281aa00) Stream removed, broadcasting: 3 I0719 12:12:19.920823 6 log.go:172] (0xc0025d7970) (0xc00142ec80) Stream removed, broadcasting: 5 Jul 19 12:12:19.920: INFO: Found all expected endpoints: [netserver-0] Jul 19 12:12:19.924: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.233 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7458 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 12:12:19.924: INFO: >>> kubeConfig: /root/.kube/config I0719 12:12:19.950223 6 log.go:172] (0xc002d94420) (0xc0027a83c0) Create stream I0719 12:12:19.950253 6 log.go:172] (0xc002d94420) (0xc0027a83c0) Stream added, broadcasting: 1 I0719 12:12:19.952654 6 log.go:172] (0xc002d94420) Reply frame received for 1 I0719 12:12:19.952697 6 log.go:172] (0xc002d94420) (0xc0027a8640) Create stream I0719 12:12:19.952720 6 log.go:172] (0xc002d94420) (0xc0027a8640) Stream added, broadcasting: 3 I0719 12:12:19.953989 6 log.go:172] (0xc002d94420) Reply frame received for 3 I0719 12:12:19.954036 6 log.go:172] (0xc002d94420) (0xc001874320) Create stream I0719 12:12:19.954055 6 log.go:172] (0xc002d94420) (0xc001874320) Stream added, broadcasting: 5 I0719 12:12:19.954974 6 log.go:172] (0xc002d94420) Reply frame received for 5 I0719 12:12:21.013281 6 log.go:172] (0xc002d94420) Data frame received for 3 I0719 12:12:21.013335 6 log.go:172] (0xc0027a8640) (3) Data frame handling I0719 12:12:21.013352 6 log.go:172] (0xc0027a8640) (3) Data frame sent I0719 12:12:21.013363 6 log.go:172] (0xc002d94420) Data frame received for 3 I0719 12:12:21.013392 6 log.go:172] 
(0xc002d94420) Data frame received for 5 I0719 12:12:21.013422 6 log.go:172] (0xc001874320) (5) Data frame handling I0719 12:12:21.013460 6 log.go:172] (0xc0027a8640) (3) Data frame handling I0719 12:12:21.015909 6 log.go:172] (0xc002d94420) Data frame received for 1 I0719 12:12:21.015958 6 log.go:172] (0xc0027a83c0) (1) Data frame handling I0719 12:12:21.016049 6 log.go:172] (0xc0027a83c0) (1) Data frame sent I0719 12:12:21.016093 6 log.go:172] (0xc002d94420) (0xc0027a83c0) Stream removed, broadcasting: 1 I0719 12:12:21.016183 6 log.go:172] (0xc002d94420) Go away received I0719 12:12:21.016268 6 log.go:172] (0xc002d94420) (0xc0027a83c0) Stream removed, broadcasting: 1 I0719 12:12:21.016311 6 log.go:172] (0xc002d94420) (0xc0027a8640) Stream removed, broadcasting: 3 I0719 12:12:21.016339 6 log.go:172] (0xc002d94420) (0xc001874320) Stream removed, broadcasting: 5 Jul 19 12:12:21.016: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 19 12:12:21.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7458" for this suite. • [SLOW TEST:29.880 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2090,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 19 12:12:21.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 19 12:12:21.438: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
alternatives.log
containers/
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:12:24.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3950" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":138,"skipped":2116,"failed":0}
SSSSSSSSSSS
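The Lease API exercised above is the coordination.k8s.io/v1 resource. As a minimal sketch of poking the same API from the CLI (the kube-node-lease namespace is the standard home for node heartbeat leases; the jsonpath is illustrative):

  # Node heartbeat leases live in the kube-node-lease namespace:
  kubectl get leases -n kube-node-lease
  # Inspect who holds a lease and when it was last renewed:
  kubectl get lease -n kube-node-lease -o jsonpath='{.items[0].spec.holderIdentity} {.items[0].spec.renewTime}'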
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:12:24.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-56d36a3b-6cf4-45b8-aade-ce8e017aaf0f
STEP: Creating a pod to test consume configMaps
Jul 19 12:12:25.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a" in namespace "configmap-8184" to be "success or failure"
Jul 19 12:12:25.223: INFO: Pod "pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.723031ms
Jul 19 12:12:27.389: INFO: Pod "pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177512339s
Jul 19 12:12:29.419: INFO: Pod "pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206825065s
Jul 19 12:12:31.423: INFO: Pod "pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210882642s
STEP: Saw pod success
Jul 19 12:12:31.423: INFO: Pod "pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a" satisfied condition "success or failure"
Jul 19 12:12:31.426: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a container configmap-volume-test: 
STEP: delete the pod
Jul 19 12:12:31.850: INFO: Waiting for pod pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a to disappear
Jul 19 12:12:31.853: INFO: Pod pod-configmaps-1c5655c4-4763-4938-a646-89269dd70d4a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:12:31.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8184" for this suite.

• [SLOW TEST:6.954 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
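What this test exercises can be reproduced by hand: mount a ConfigMap volume with defaultMode set and confirm the mode bits of the projected files. A minimal sketch with illustrative names (not the e2e harness's own manifests):

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-defaultmode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/configmap-volume"]
      volumeMounts:
      - name: cm
        mountPath: /etc/configmap-volume
    volumes:
    - name: cm
      configMap:
        name: demo-cm
        defaultMode: 0400        # files show up as -r-------- inside the pod
  EOF
  kubectl logs cm-defaultmode-demo    # once the pod reaches Succeeded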
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:12:31.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-b54bdf54-dc1e-461a-b0aa-958d20f27d4b
STEP: Creating a pod to test consume secrets
Jul 19 12:12:32.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b" in namespace "projected-9840" to be "success or failure"
Jul 19 12:12:32.488: INFO: Pod "pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153677ms
Jul 19 12:12:34.787: INFO: Pod "pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301244237s
Jul 19 12:12:37.186: INFO: Pod "pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.700269849s
Jul 19 12:12:39.190: INFO: Pod "pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.703860346s
STEP: Saw pod success
Jul 19 12:12:39.190: INFO: Pod "pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b" satisfied condition "success or failure"
Jul 19 12:12:39.192: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b container projected-secret-volume-test: 
STEP: delete the pod
Jul 19 12:12:39.305: INFO: Waiting for pod pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b to disappear
Jul 19 12:12:39.308: INFO: Pod pod-projected-secrets-4dc01e6e-34a5-4035-9650-c04a04b7ec2b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:12:39.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9840" for this suite.

• [SLOW TEST:7.800 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2155,"failed":0}
SSSSSSSSSSSSSSS
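The projected-secret variant adds two twists over a plain secret volume: items remap secret keys to new paths, and a per-item mode overrides the volume's defaultMode. A sketch with illustrative names:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
      volumeMounts:
      - name: projected
        mountPath: /etc/projected
    volumes:
    - name: projected
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: data-1
              path: new-path-data-1
              mode: 0400          # per-item mode wins over defaultMode
  EOF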
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:12:39.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-9871
STEP: creating replication controller nodeport-test in namespace services-9871
I0719 12:12:41.221946       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9871, replica count: 2
I0719 12:12:44.272374       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:12:47.272634       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 19 12:12:47.272: INFO: Creating new exec pod
Jul 19 12:12:54.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9871 execpodflccg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jul 19 12:13:10.298: INFO: stderr: "I0719 12:13:10.209040    2232 log.go:172] (0xc0004ca2c0) (0xc00030b540) Create stream\nI0719 12:13:10.209082    2232 log.go:172] (0xc0004ca2c0) (0xc00030b540) Stream added, broadcasting: 1\nI0719 12:13:10.212155    2232 log.go:172] (0xc0004ca2c0) Reply frame received for 1\nI0719 12:13:10.212206    2232 log.go:172] (0xc0004ca2c0) (0xc00070bd60) Create stream\nI0719 12:13:10.212222    2232 log.go:172] (0xc0004ca2c0) (0xc00070bd60) Stream added, broadcasting: 3\nI0719 12:13:10.213255    2232 log.go:172] (0xc0004ca2c0) Reply frame received for 3\nI0719 12:13:10.213300    2232 log.go:172] (0xc0004ca2c0) (0xc00070be00) Create stream\nI0719 12:13:10.213310    2232 log.go:172] (0xc0004ca2c0) (0xc00070be00) Stream added, broadcasting: 5\nI0719 12:13:10.214376    2232 log.go:172] (0xc0004ca2c0) Reply frame received for 5\nI0719 12:13:10.289931    2232 log.go:172] (0xc0004ca2c0) Data frame received for 5\nI0719 12:13:10.290067    2232 log.go:172] (0xc00070be00) (5) Data frame handling\nI0719 12:13:10.290103    2232 log.go:172] (0xc00070be00) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0719 12:13:10.290375    2232 log.go:172] (0xc0004ca2c0) Data frame received for 5\nI0719 12:13:10.290408    2232 log.go:172] (0xc00070be00) (5) Data frame handling\nI0719 12:13:10.290437    2232 log.go:172] (0xc00070be00) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0719 12:13:10.290512    2232 log.go:172] (0xc0004ca2c0) Data frame received for 3\nI0719 12:13:10.290535    2232 log.go:172] (0xc00070bd60) (3) Data frame handling\nI0719 12:13:10.290787    2232 log.go:172] (0xc0004ca2c0) Data frame received for 5\nI0719 12:13:10.290799    2232 log.go:172] (0xc00070be00) (5) Data frame handling\nI0719 12:13:10.292263    2232 log.go:172] (0xc0004ca2c0) Data frame received for 1\nI0719 12:13:10.292285    2232 log.go:172] (0xc00030b540) (1) Data frame handling\nI0719 12:13:10.292300    2232 log.go:172] (0xc00030b540) (1) Data frame sent\nI0719 12:13:10.292312    2232 log.go:172] (0xc0004ca2c0) (0xc00030b540) Stream removed, broadcasting: 1\nI0719 12:13:10.292323    2232 log.go:172] (0xc0004ca2c0) Go away received\nI0719 12:13:10.292693    2232 log.go:172] (0xc0004ca2c0) (0xc00030b540) Stream removed, broadcasting: 1\nI0719 12:13:10.292712    2232 log.go:172] (0xc0004ca2c0) (0xc00070bd60) Stream removed, broadcasting: 3\nI0719 12:13:10.292720    2232 log.go:172] (0xc0004ca2c0) (0xc00070be00) Stream removed, broadcasting: 5\n"
Jul 19 12:13:10.298: INFO: stdout: ""
Jul 19 12:13:10.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9871 execpodflccg -- /bin/sh -x -c nc -zv -t -w 2 10.111.12.16 80'
Jul 19 12:13:10.484: INFO: stderr: "I0719 12:13:10.425318    2258 log.go:172] (0xc0007da9a0) (0xc0007ce000) Create stream\nI0719 12:13:10.425389    2258 log.go:172] (0xc0007da9a0) (0xc0007ce000) Stream added, broadcasting: 1\nI0719 12:13:10.427537    2258 log.go:172] (0xc0007da9a0) Reply frame received for 1\nI0719 12:13:10.427570    2258 log.go:172] (0xc0007da9a0) (0xc000627ae0) Create stream\nI0719 12:13:10.427582    2258 log.go:172] (0xc0007da9a0) (0xc000627ae0) Stream added, broadcasting: 3\nI0719 12:13:10.428344    2258 log.go:172] (0xc0007da9a0) Reply frame received for 3\nI0719 12:13:10.428368    2258 log.go:172] (0xc0007da9a0) (0xc00061c000) Create stream\nI0719 12:13:10.428379    2258 log.go:172] (0xc0007da9a0) (0xc00061c000) Stream added, broadcasting: 5\nI0719 12:13:10.429315    2258 log.go:172] (0xc0007da9a0) Reply frame received for 5\nI0719 12:13:10.479278    2258 log.go:172] (0xc0007da9a0) Data frame received for 3\nI0719 12:13:10.479308    2258 log.go:172] (0xc000627ae0) (3) Data frame handling\nI0719 12:13:10.479340    2258 log.go:172] (0xc0007da9a0) Data frame received for 5\nI0719 12:13:10.479366    2258 log.go:172] (0xc00061c000) (5) Data frame handling\nI0719 12:13:10.479390    2258 log.go:172] (0xc00061c000) (5) Data frame sent\nI0719 12:13:10.479406    2258 log.go:172] (0xc0007da9a0) Data frame received for 5\nI0719 12:13:10.479415    2258 log.go:172] (0xc00061c000) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.12.16 80\nConnection to 10.111.12.16 80 port [tcp/http] succeeded!\nI0719 12:13:10.479454    2258 log.go:172] (0xc0007da9a0) Data frame received for 1\nI0719 12:13:10.479469    2258 log.go:172] (0xc0007ce000) (1) Data frame handling\nI0719 12:13:10.479488    2258 log.go:172] (0xc0007ce000) (1) Data frame sent\nI0719 12:13:10.479504    2258 log.go:172] (0xc0007da9a0) (0xc0007ce000) Stream removed, broadcasting: 1\nI0719 12:13:10.479518    2258 log.go:172] (0xc0007da9a0) Go away received\nI0719 12:13:10.479968    2258 log.go:172] (0xc0007da9a0) (0xc0007ce000) Stream removed, broadcasting: 1\nI0719 12:13:10.479987    2258 log.go:172] (0xc0007da9a0) (0xc000627ae0) Stream removed, broadcasting: 3\nI0719 12:13:10.479997    2258 log.go:172] (0xc0007da9a0) (0xc00061c000) Stream removed, broadcasting: 5\n"
Jul 19 12:13:10.484: INFO: stdout: ""
Jul 19 12:13:10.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9871 execpodflccg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30798'
Jul 19 12:13:10.673: INFO: stderr: "I0719 12:13:10.600957    2281 log.go:172] (0xc0003c3080) (0xc0007d80a0) Create stream\nI0719 12:13:10.601015    2281 log.go:172] (0xc0003c3080) (0xc0007d80a0) Stream added, broadcasting: 1\nI0719 12:13:10.603155    2281 log.go:172] (0xc0003c3080) Reply frame received for 1\nI0719 12:13:10.603188    2281 log.go:172] (0xc0003c3080) (0xc0006099a0) Create stream\nI0719 12:13:10.603196    2281 log.go:172] (0xc0003c3080) (0xc0006099a0) Stream added, broadcasting: 3\nI0719 12:13:10.604089    2281 log.go:172] (0xc0003c3080) Reply frame received for 3\nI0719 12:13:10.604118    2281 log.go:172] (0xc0003c3080) (0xc000518000) Create stream\nI0719 12:13:10.604128    2281 log.go:172] (0xc0003c3080) (0xc000518000) Stream added, broadcasting: 5\nI0719 12:13:10.607517    2281 log.go:172] (0xc0003c3080) Reply frame received for 5\nI0719 12:13:10.666847    2281 log.go:172] (0xc0003c3080) Data frame received for 5\nI0719 12:13:10.666885    2281 log.go:172] (0xc000518000) (5) Data frame handling\nI0719 12:13:10.666904    2281 log.go:172] (0xc000518000) (5) Data frame sent\nI0719 12:13:10.666919    2281 log.go:172] (0xc0003c3080) Data frame received for 5\nI0719 12:13:10.666932    2281 log.go:172] (0xc000518000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 30798\nConnection to 172.18.0.6 30798 port [tcp/30798] succeeded!\nI0719 12:13:10.666984    2281 log.go:172] (0xc000518000) (5) Data frame sent\nI0719 12:13:10.667011    2281 log.go:172] (0xc0003c3080) Data frame received for 3\nI0719 12:13:10.667037    2281 log.go:172] (0xc0006099a0) (3) Data frame handling\nI0719 12:13:10.667221    2281 log.go:172] (0xc0003c3080) Data frame received for 5\nI0719 12:13:10.667242    2281 log.go:172] (0xc000518000) (5) Data frame handling\nI0719 12:13:10.668526    2281 log.go:172] (0xc0003c3080) Data frame received for 1\nI0719 12:13:10.668550    2281 log.go:172] (0xc0007d80a0) (1) Data frame handling\nI0719 12:13:10.668559    2281 log.go:172] (0xc0007d80a0) (1) Data frame sent\nI0719 12:13:10.668574    2281 log.go:172] (0xc0003c3080) (0xc0007d80a0) Stream removed, broadcasting: 1\nI0719 12:13:10.668596    2281 log.go:172] (0xc0003c3080) Go away received\nI0719 12:13:10.669118    2281 log.go:172] (0xc0003c3080) (0xc0007d80a0) Stream removed, broadcasting: 1\nI0719 12:13:10.669154    2281 log.go:172] (0xc0003c3080) (0xc0006099a0) Stream removed, broadcasting: 3\nI0719 12:13:10.669166    2281 log.go:172] (0xc0003c3080) (0xc000518000) Stream removed, broadcasting: 5\n"
Jul 19 12:13:10.673: INFO: stdout: ""
Jul 19 12:13:10.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9871 execpodflccg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 30798'
Jul 19 12:13:10.862: INFO: stderr: "I0719 12:13:10.789209    2301 log.go:172] (0xc000ad6210) (0xc000b320a0) Create stream\nI0719 12:13:10.789297    2301 log.go:172] (0xc000ad6210) (0xc000b320a0) Stream added, broadcasting: 1\nI0719 12:13:10.791094    2301 log.go:172] (0xc000ad6210) Reply frame received for 1\nI0719 12:13:10.791118    2301 log.go:172] (0xc000ad6210) (0xc000b32140) Create stream\nI0719 12:13:10.791127    2301 log.go:172] (0xc000ad6210) (0xc000b32140) Stream added, broadcasting: 3\nI0719 12:13:10.792041    2301 log.go:172] (0xc000ad6210) Reply frame received for 3\nI0719 12:13:10.792087    2301 log.go:172] (0xc000ad6210) (0xc000926000) Create stream\nI0719 12:13:10.792109    2301 log.go:172] (0xc000ad6210) (0xc000926000) Stream added, broadcasting: 5\nI0719 12:13:10.793094    2301 log.go:172] (0xc000ad6210) Reply frame received for 5\nI0719 12:13:10.857774    2301 log.go:172] (0xc000ad6210) Data frame received for 3\nI0719 12:13:10.857796    2301 log.go:172] (0xc000b32140) (3) Data frame handling\nI0719 12:13:10.857825    2301 log.go:172] (0xc000ad6210) Data frame received for 5\nI0719 12:13:10.857848    2301 log.go:172] (0xc000926000) (5) Data frame handling\nI0719 12:13:10.857863    2301 log.go:172] (0xc000926000) (5) Data frame sent\nI0719 12:13:10.857869    2301 log.go:172] (0xc000ad6210) Data frame received for 5\nI0719 12:13:10.857873    2301 log.go:172] (0xc000926000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.10 30798\nConnection to 172.18.0.10 30798 port [tcp/30798] succeeded!\nI0719 12:13:10.859144    2301 log.go:172] (0xc000ad6210) Data frame received for 1\nI0719 12:13:10.859166    2301 log.go:172] (0xc000b320a0) (1) Data frame handling\nI0719 12:13:10.859190    2301 log.go:172] (0xc000b320a0) (1) Data frame sent\nI0719 12:13:10.859212    2301 log.go:172] (0xc000ad6210) (0xc000b320a0) Stream removed, broadcasting: 1\nI0719 12:13:10.859250    2301 log.go:172] (0xc000ad6210) Go away received\nI0719 12:13:10.859534    2301 log.go:172] (0xc000ad6210) (0xc000b320a0) Stream removed, broadcasting: 1\nI0719 12:13:10.859547    2301 log.go:172] (0xc000ad6210) (0xc000b32140) Stream removed, broadcasting: 3\nI0719 12:13:10.859555    2301 log.go:172] (0xc000ad6210) (0xc000926000) Stream removed, broadcasting: 5\n"
Jul 19 12:13:10.862: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:13:10.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9871" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:31.208 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":141,"skipped":2170,"failed":0}
SSSSSSSSSSSSSSSSS
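The three nc probes in the transcript cover the three ways a NodePort service is reachable. Reconstructed as standalone commands (the IPs and port are the ones this run discovered; on another cluster they would differ):

  # Find the cluster IP and the allocated nodePort:
  kubectl -n services-9871 get svc nodeport-test \
    -o jsonpath='{.spec.clusterIP} {.spec.ports[0].nodePort}'
  # From a pod inside the cluster, all three paths must accept TCP connections:
  nc -zv -t -w 2 nodeport-test 80      # service DNS name
  nc -zv -t -w 2 10.111.12.16 80       # cluster IP
  nc -zv -t -w 2 172.18.0.6 30798      # node IP and nodePort (checked on every node)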
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:13:10.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-f385f969-e9e3-4327-b194-b20c3fcf84f3
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:13:17.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5615" for this suite.

• [SLOW TEST:6.658 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2187,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
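Binary (non-UTF-8) ConfigMap payloads are stored under .binaryData rather than .data, which is the property this test checks from inside a pod. The same behavior is visible from the CLI; file and object names below are illustrative:

  head -c 16 /dev/urandom > blob.bin
  kubectl create configmap bin-demo --from-file=blob=blob.bin
  # The key lands in .binaryData, base64-encoded:
  kubectl get configmap bin-demo -o jsonpath='{.binaryData.blob}' | base64 -d | od -An -tx1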
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:13:17.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-de6ccee1-ef3b-4cc7-b68d-2672d6f643ba
STEP: Creating a pod to test consume secrets
Jul 19 12:13:17.824: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4" in namespace "projected-519" to be "success or failure"
Jul 19 12:13:17.893: INFO: Pod "pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4": Phase="Pending", Reason="", readiness=false. Elapsed: 68.761804ms
Jul 19 12:13:19.952: INFO: Pod "pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128267192s
Jul 19 12:13:21.989: INFO: Pod "pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164926131s
Jul 19 12:13:24.552: INFO: Pod "pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.728591606s
STEP: Saw pod success
Jul 19 12:13:24.553: INFO: Pod "pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4" satisfied condition "success or failure"
Jul 19 12:13:24.602: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4 container projected-secret-volume-test: 
STEP: delete the pod
Jul 19 12:13:25.998: INFO: Waiting for pod pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4 to disappear
Jul 19 12:13:26.246: INFO: Pod pod-projected-secrets-934a76e4-fa80-436f-955b-0bea387017c4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:13:26.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-519" for this suite.

• [SLOW TEST:8.897 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2211,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:13:26.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 19 12:13:33.309: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c8261ee7-88ac-4e78-b61e-f3bab0d2e766"
Jul 19 12:13:33.309: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c8261ee7-88ac-4e78-b61e-f3bab0d2e766" in namespace "pods-6818" to be "terminated due to deadline exceeded"
Jul 19 12:13:33.318: INFO: Pod "pod-update-activedeadlineseconds-c8261ee7-88ac-4e78-b61e-f3bab0d2e766": Phase="Running", Reason="", readiness=true. Elapsed: 8.808496ms
Jul 19 12:13:35.321: INFO: Pod "pod-update-activedeadlineseconds-c8261ee7-88ac-4e78-b61e-f3bab0d2e766": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012287897s
Jul 19 12:13:35.321: INFO: Pod "pod-update-activedeadlineseconds-c8261ee7-88ac-4e78-b61e-f3bab0d2e766" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:13:35.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6818" for this suite.

• [SLOW TEST:8.903 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2228,"failed":0}
SSSS
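activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a live pod, and lowering it forces the kubelet to kill the pod with reason DeadlineExceeded, which is the "terminated due to deadline exceeded" condition asserted above. A sketch with an illustrative pod name:

  kubectl run deadline-demo --image=busybox --restart=Never -- sleep 3600
  kubectl wait --for=condition=Ready pod/deadline-demo --timeout=120s
  # Lower the deadline on the running pod:
  kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
  sleep 10
  kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'
  # expected output: Failed/DeadlineExceeded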
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:13:35.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:13:35.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:13:39.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9387" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
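This test dials the pod's log subresource over a websocket rather than going through kubectl, but the endpoint is the same one kubectl logs uses and can be hit directly; pod and namespace names below are placeholders:

  kubectl logs <pod> -n <namespace>
  # Equivalent raw request against the API server's log subresource:
  kubectl get --raw "/api/v1/namespaces/<namespace>/pods/<pod>/log"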
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:13:39.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jul 19 12:13:40.565: INFO: Waiting up to 5m0s for pod "client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312" in namespace "containers-8659" to be "success or failure"
Jul 19 12:13:41.193: INFO: Pod "client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312": Phase="Pending", Reason="", readiness=false. Elapsed: 628.388284ms
Jul 19 12:13:43.271: INFO: Pod "client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.705848266s
Jul 19 12:13:45.490: INFO: Pod "client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312": Phase="Pending", Reason="", readiness=false. Elapsed: 4.925652603s
Jul 19 12:13:47.570: INFO: Pod "client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312": Phase="Pending", Reason="", readiness=false. Elapsed: 7.005054801s
Jul 19 12:13:50.319: INFO: Pod "client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.754261397s
STEP: Saw pod success
Jul 19 12:13:50.319: INFO: Pod "client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312" satisfied condition "success or failure"
Jul 19 12:13:50.540: INFO: Trying to get logs from node jerma-worker2 pod client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312 container test-container: 
STEP: delete the pod
Jul 19 12:13:51.411: INFO: Waiting for pod client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312 to disappear
Jul 19 12:13:51.654: INFO: Pod client-containers-0ecb7d7e-39b9-4133-8187-b384285e1312 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:13:51.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8659" for this suite.

• [SLOW TEST:11.778 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2311,"failed":0}
SS
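In a pod spec, .command replaces the image's ENTRYPOINT and .args replaces its CMD, which is what "override the image's default command" means here. A minimal sketch with illustrative names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: entrypoint-override-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["echo"]                            # overrides ENTRYPOINT
      args: ["hello from the overridden command"]  # overrides CMD
  EOF
  kubectl logs entrypoint-override-demo   # once Succeeded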
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:13:51.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-8bb651f1-bff5-45c1-b6c2-1930bc05f171
STEP: Creating a pod to test consume secrets
Jul 19 12:13:53.810: INFO: Waiting up to 5m0s for pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352" in namespace "secrets-1789" to be "success or failure"
Jul 19 12:13:54.074: INFO: Pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352": Phase="Pending", Reason="", readiness=false. Elapsed: 263.504738ms
Jul 19 12:13:56.858: INFO: Pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352": Phase="Pending", Reason="", readiness=false. Elapsed: 3.04747409s
Jul 19 12:13:59.116: INFO: Pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352": Phase="Pending", Reason="", readiness=false. Elapsed: 5.305262141s
Jul 19 12:14:01.128: INFO: Pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352": Phase="Pending", Reason="", readiness=false. Elapsed: 7.317776656s
Jul 19 12:14:03.175: INFO: Pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352": Phase="Running", Reason="", readiness=true. Elapsed: 9.36469876s
Jul 19 12:14:05.179: INFO: Pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.368371707s
STEP: Saw pod success
Jul 19 12:14:05.179: INFO: Pod "pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352" satisfied condition "success or failure"
Jul 19 12:14:05.181: INFO: Trying to get logs from node jerma-worker pod pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352 container secret-volume-test: 
STEP: delete the pod
Jul 19 12:14:05.380: INFO: Waiting for pod pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352 to disappear
Jul 19 12:14:05.393: INFO: Pod pod-secrets-78a85168-4b20-4709-a4a6-e04a4bf2a352 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:14:05.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1789" for this suite.

• [SLOW TEST:13.754 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2313,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:14:05.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-6d957e84-9f1b-44f6-afec-c670f1f51e25
STEP: Creating a pod to test consume secrets
Jul 19 12:14:05.534: INFO: Waiting up to 5m0s for pod "pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7" in namespace "secrets-1215" to be "success or failure"
Jul 19 12:14:05.552: INFO: Pod "pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.041847ms
Jul 19 12:14:07.642: INFO: Pod "pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108729021s
Jul 19 12:14:09.786: INFO: Pod "pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.251920589s
Jul 19 12:14:11.881: INFO: Pod "pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.347312521s
STEP: Saw pod success
Jul 19 12:14:11.881: INFO: Pod "pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7" satisfied condition "success or failure"
Jul 19 12:14:11.883: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7 container secret-volume-test: 
STEP: delete the pod
Jul 19 12:14:11.969: INFO: Waiting for pod pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7 to disappear
Jul 19 12:14:11.974: INFO: Pod pod-secrets-d2608a26-81c6-41b7-a9b8-ba1ebcac09e7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:14:11.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1215" for this suite.

• [SLOW TEST:6.613 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2317,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:14:12.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-1373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1373 to expose endpoints map[]
Jul 19 12:14:12.384: INFO: Get endpoints failed (93.102697ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 19 12:14:13.387: INFO: successfully validated that service multi-endpoint-test in namespace services-1373 exposes endpoints map[] (1.096403325s elapsed)
STEP: Creating pod pod1 in namespace services-1373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1373 to expose endpoints map[pod1:[100]]
Jul 19 12:14:17.568: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.17465163s elapsed, will retry)
Jul 19 12:14:18.576: INFO: successfully validated that service multi-endpoint-test in namespace services-1373 exposes endpoints map[pod1:[100]] (5.182561378s elapsed)
STEP: Creating pod pod2 in namespace services-1373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1373 to expose endpoints map[pod1:[100] pod2:[101]]
Jul 19 12:14:22.953: INFO: successfully validated that service multi-endpoint-test in namespace services-1373 exposes endpoints map[pod1:[100] pod2:[101]] (4.373471963s elapsed)
STEP: Deleting pod pod1 in namespace services-1373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1373 to expose endpoints map[pod2:[101]]
Jul 19 12:14:22.979: INFO: successfully validated that service multi-endpoint-test in namespace services-1373 exposes endpoints map[pod2:[101]] (20.474718ms elapsed)
STEP: Deleting pod pod2 in namespace services-1373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1373 to expose endpoints map[]
Jul 19 12:14:23.010: INFO: successfully validated that service multi-endpoint-test in namespace services-1373 exposes endpoints map[] (27.774032ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:14:23.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1373" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.012 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":149,"skipped":2357,"failed":0}
SSSSSSSSSSS
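The endpoint maps in the transcript (map[pod1:[100] pod2:[101]]) come from a Service that exposes two named ports whose targetPorts are 100 and 101; endpoints are tracked per port and per ready pod. A sketch of an equivalent Service, with illustrative names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-demo
  spec:
    selector:
      app: demo
    ports:
    - name: portname1
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101
  EOF
  # Endpoints stay at map[] until ready pods match the selector:
  kubectl get endpoints multi-endpoint-demo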
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:14:23.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 19 12:14:23.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809" in namespace "downward-api-3098" to be "success or failure"
Jul 19 12:14:23.562: INFO: Pod "downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809": Phase="Pending", Reason="", readiness=false. Elapsed: 14.719608ms
Jul 19 12:14:25.658: INFO: Pod "downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111014563s
Jul 19 12:14:27.799: INFO: Pod "downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251965003s
Jul 19 12:14:29.810: INFO: Pod "downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26260651s
Jul 19 12:14:31.983: INFO: Pod "downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.436478261s
STEP: Saw pod success
Jul 19 12:14:31.983: INFO: Pod "downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809" satisfied condition "success or failure"
Jul 19 12:14:31.986: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809 container client-container: 
STEP: delete the pod
Jul 19 12:14:32.043: INFO: Waiting for pod downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809 to disappear
Jul 19 12:14:32.047: INFO: Pod downwardapi-volume-44f9549a-1227-4e16-b145-9be8d3793809 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:14:32.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3098" for this suite.

• [SLOW TEST:9.010 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2368,"failed":0}
SSSSSSSSSSS
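When a container declares no memory limit, the downward API reports the node's allocatable memory instead; that fallback is what this test asserts. A sketch of such a pod, with illustrative names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mem-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
      # deliberately no resources.limits.memory here
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF
  kubectl logs downward-mem-demo   # prints node allocatable memory in bytes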
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:14:32.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9671.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9671.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9671.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9671.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9671.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.193.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.193.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.193.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.193.141_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9671.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9671.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9671.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9671.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9671.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9671.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.193.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.193.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.193.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.193.141_tcp@PTR;sleep 1; done

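Each generated loop above repeats the same three-step probe per record type; unrolled for one name and written for a plain shell (the doubled $$ in the pod command collapses to a single $ by the time it reaches the shell, and /results is the volume the prober pod collects markers from):

  # Query, require a non-empty answer, then drop an OK marker for the prober to collect:
  check="$(dig +notcp +noall +answer +search dns-test-service.dns-9671.svc.cluster.local A)" \
    && test -n "$check" \
    && echo OK > /results/wheezy_udp@dns-test-service.dns-9671.svc.cluster.local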
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 19 12:14:43.992: INFO: Unable to read wheezy_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:43.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:43.996: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:43.999: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:44.016: INFO: Unable to read jessie_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:44.019: INFO: Unable to read jessie_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:44.022: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:44.025: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:44.041: INFO: Lookups using dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2 failed for: [wheezy_udp@dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_udp@dns-test-service.dns-9671.svc.cluster.local jessie_tcp@dns-test-service.dns-9671.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local]

Jul 19 12:14:49.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.317: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.337: INFO: Unable to read jessie_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.340: INFO: Unable to read jessie_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.343: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.346: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:49.362: INFO: Lookups using dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2 failed for: [wheezy_udp@dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_udp@dns-test-service.dns-9671.svc.cluster.local jessie_tcp@dns-test-service.dns-9671.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local]

Jul 19 12:14:54.464: INFO: Unable to read wheezy_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.467: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.470: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.474: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.568: INFO: Unable to read jessie_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.571: INFO: Unable to read jessie_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.574: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:54.595: INFO: Lookups using dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2 failed for: [wheezy_udp@dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_udp@dns-test-service.dns-9671.svc.cluster.local jessie_tcp@dns-test-service.dns-9671.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local]

Jul 19 12:14:59.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.511: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.514: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.550: INFO: Unable to read jessie_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.552: INFO: Unable to read jessie_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.554: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.557: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:14:59.571: INFO: Lookups using dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2 failed for: [wheezy_udp@dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_udp@dns-test-service.dns-9671.svc.cluster.local jessie_tcp@dns-test-service.dns-9671.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local]

Jul 19 12:15:04.249: INFO: Unable to read wheezy_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:04.511: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:04.514: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:04.847: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:05.204: INFO: Unable to read jessie_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:05.207: INFO: Unable to read jessie_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:05.209: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:05.211: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:05.226: INFO: Lookups using dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2 failed for: [wheezy_udp@dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_udp@dns-test-service.dns-9671.svc.cluster.local jessie_tcp@dns-test-service.dns-9671.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local]

Jul 19 12:15:09.044: INFO: Unable to read wheezy_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.047: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.050: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.052: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.249: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: Get https://172.30.12.66:45705/api/v1/namespaces/dns-9671/pods/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2/proxy/results/wheezy_udp@_http._tcp.test-service-2.dns-9671.svc.cluster.local: stream error: stream ID 2023; INTERNAL_ERROR
Jul 19 12:15:09.706: INFO: Unable to read jessie_udp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.713: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local from pod dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2: the server could not find the requested resource (get pods dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2)
Jul 19 12:15:09.736: INFO: Lookups using dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2 failed for: [wheezy_udp@dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@dns-test-service.dns-9671.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9671.svc.cluster.local jessie_udp@dns-test-service.dns-9671.svc.cluster.local jessie_tcp@dns-test-service.dns-9671.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9671.svc.cluster.local]

Jul 19 12:15:14.097: INFO: DNS probes using dns-9671/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2 succeeded
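
The prober results above are read back through the API server's pod proxy, as the failed fetch at 12:15:09.249 shows (.../pods/<pod>/proxy/results/<name>). A roughly equivalent manual check, assuming a configured kubectl and the pod name from this run:

  # fetch one marker file through the pod proxy; prints "OK" once the probe loop has written it
  kubectl get --raw \
    "/api/v1/namespaces/dns-9671/pods/dns-test-c4e93f3f-8b7a-4801-acdd-248d18aedae2/proxy/results/jessie_udp@dns-test-service.dns-9671.svc.cluster.local"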

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:15:18.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9671" for this suite.

• [SLOW TEST:46.997 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":151,"skipped":2379,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:15:19.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:15:35.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5976" for this suite.

• [SLOW TEST:16.479 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:15:35.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-4ae2b575-aabc-4b27-97fb-b8952d1c4e3b
STEP: Creating secret with name s-test-opt-upd-c353d9b3-bdf8-4057-86e5-a5f97e55aa90
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4ae2b575-aabc-4b27-97fb-b8952d1c4e3b
STEP: Updating secret s-test-opt-upd-c353d9b3-bdf8-4057-86e5-a5f97e55aa90
STEP: Creating secret with name s-test-opt-create-e467634b-d2a5-4cc5-87d2-0fd53f13a60d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:16:58.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8110" for this suite.

• [SLOW TEST:83.317 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2444,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:16:58.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5256 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5256;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5256 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5256;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5256.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5256.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5256.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5256.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.183.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.183.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.183.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.183.5_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5256 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5256;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5256 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5256;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5256.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5256.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5256.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5256.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.183.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.183.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.183.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.183.5_tcp@PTR;sleep 1; done
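
The partial names above resolve because the probes pass dig's +search flag, which applies the pod's DNS search path. Inside a pod in namespace dns-5256 the resolver configuration conventionally looks like the sketch below (nameserver address illustrative), so a bare "dns-test-service" expands to the fully qualified service name via the first search suffix:

  # /etc/resolv.conf inside a pod in namespace dns-5256, cluster domain cluster.local:
  #   search dns-5256.svc.cluster.local svc.cluster.local cluster.local
  #   nameserver 10.96.0.10
  dig +notcp +noall +answer +search dns-test-service A   # expands via the search list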

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 19 12:17:09.175: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.178: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.181: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.187: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.190: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.193: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.196: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.217: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.220: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.223: INFO: Unable to read jessie_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.227: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.230: INFO: Unable to read jessie_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.233: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.236: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.239: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:09.258: INFO: Lookups using dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5256 wheezy_tcp@dns-test-service.dns-5256 wheezy_udp@dns-test-service.dns-5256.svc wheezy_tcp@dns-test-service.dns-5256.svc wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5256 jessie_tcp@dns-test-service.dns-5256 jessie_udp@dns-test-service.dns-5256.svc jessie_tcp@dns-test-service.dns-5256.svc jessie_udp@_http._tcp.dns-test-service.dns-5256.svc jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc]

Jul 19 12:17:14.263: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.267: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.273: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.277: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.279: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.282: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.285: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.287: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.325: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.327: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.330: INFO: Unable to read jessie_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.332: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.334: INFO: Unable to read jessie_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.336: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.339: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.341: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:14.356: INFO: Lookups using dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5256 wheezy_tcp@dns-test-service.dns-5256 wheezy_udp@dns-test-service.dns-5256.svc wheezy_tcp@dns-test-service.dns-5256.svc wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5256 jessie_tcp@dns-test-service.dns-5256 jessie_udp@dns-test-service.dns-5256.svc jessie_tcp@dns-test-service.dns-5256.svc jessie_udp@_http._tcp.dns-test-service.dns-5256.svc jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc]

Jul 19 12:17:19.357: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.361: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.364: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.367: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.369: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.374: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.377: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.927: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.929: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.931: INFO: Unable to read jessie_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.936: INFO: Unable to read jessie_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.938: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.940: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:19.942: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:20.166: INFO: Lookups using dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5256 wheezy_tcp@dns-test-service.dns-5256 wheezy_udp@dns-test-service.dns-5256.svc wheezy_tcp@dns-test-service.dns-5256.svc wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5256 jessie_tcp@dns-test-service.dns-5256 jessie_udp@dns-test-service.dns-5256.svc jessie_tcp@dns-test-service.dns-5256.svc jessie_udp@_http._tcp.dns-test-service.dns-5256.svc jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc]

Jul 19 12:17:24.339: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:24.342: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:24.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:25.107: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:25.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:25.682: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:25.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:25.687: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.054: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.056: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.058: INFO: Unable to read jessie_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.061: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.063: INFO: Unable to read jessie_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.066: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.068: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.071: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:26.089: INFO: Lookups using dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5256 wheezy_tcp@dns-test-service.dns-5256 wheezy_udp@dns-test-service.dns-5256.svc wheezy_tcp@dns-test-service.dns-5256.svc wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5256 jessie_tcp@dns-test-service.dns-5256 jessie_udp@dns-test-service.dns-5256.svc jessie_tcp@dns-test-service.dns-5256.svc jessie_udp@_http._tcp.dns-test-service.dns-5256.svc jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc]

Jul 19 12:17:29.292: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.295: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.388: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.391: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.483: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.486: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.490: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.494: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.861: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.865: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.869: INFO: Unable to read jessie_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.877: INFO: Unable to read jessie_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.882: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.883: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:29.926: INFO: Lookups using dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5256 wheezy_tcp@dns-test-service.dns-5256 wheezy_udp@dns-test-service.dns-5256.svc wheezy_tcp@dns-test-service.dns-5256.svc wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5256 jessie_tcp@dns-test-service.dns-5256 jessie_udp@dns-test-service.dns-5256.svc jessie_tcp@dns-test-service.dns-5256.svc jessie_udp@_http._tcp.dns-test-service.dns-5256.svc jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc]

Jul 19 12:17:34.527: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.530: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.533: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.536: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.538: INFO: Unable to read wheezy_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.539: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.542: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.544: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.558: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.560: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.563: INFO: Unable to read jessie_udp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.680: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256 from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.684: INFO: Unable to read jessie_udp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.686: INFO: Unable to read jessie_tcp@dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.688: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.691: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc from pod dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566: the server could not find the requested resource (get pods dns-test-c4089554-ea96-4399-9eed-8b31b4931566)
Jul 19 12:17:34.704: INFO: Lookups using dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5256 wheezy_tcp@dns-test-service.dns-5256 wheezy_udp@dns-test-service.dns-5256.svc wheezy_tcp@dns-test-service.dns-5256.svc wheezy_udp@_http._tcp.dns-test-service.dns-5256.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5256.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5256 jessie_tcp@dns-test-service.dns-5256 jessie_udp@dns-test-service.dns-5256.svc jessie_tcp@dns-test-service.dns-5256.svc jessie_udp@_http._tcp.dns-test-service.dns-5256.svc jessie_tcp@_http._tcp.dns-test-service.dns-5256.svc]

Jul 19 12:17:39.995: INFO: DNS probes using dns-5256/dns-test-c4089554-ea96-4399-9eed-8b31b4931566 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:17:44.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5256" for this suite.

• [SLOW TEST:46.075 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":154,"skipped":2455,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:17:44.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:17:57.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8912" for this suite.

• [SLOW TEST:13.043 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":155,"skipped":2459,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:17:57.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 19 12:17:58.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92" in namespace "downward-api-7729" to be "success or failure"
Jul 19 12:17:58.657: INFO: Pod "downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92": Phase="Pending", Reason="", readiness=false. Elapsed: 89.309809ms
Jul 19 12:18:00.661: INFO: Pod "downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092986457s
Jul 19 12:18:02.665: INFO: Pod "downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097042754s
Jul 19 12:18:04.880: INFO: Pod "downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92": Phase="Running", Reason="", readiness=true. Elapsed: 6.312276032s
Jul 19 12:18:06.915: INFO: Pod "downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.347322699s
STEP: Saw pod success
Jul 19 12:18:06.915: INFO: Pod "downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92" satisfied condition "success or failure"
Jul 19 12:18:06.918: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92 container client-container: 
STEP: delete the pod
Jul 19 12:18:07.073: INFO: Waiting for pod downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92 to disappear
Jul 19 12:18:07.088: INFO: Pod downwardapi-volume-70beb576-433e-4ace-9ca0-d7fd59623e92 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:18:07.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7729" for this suite.

• [SLOW TEST:9.119 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2463,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:18:07.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:18:15.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7309" for this suite.

• [SLOW TEST:8.632 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2492,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:18:15.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-58475faf-d701-4045-802e-7cef64b6a6cc
STEP: Creating a pod to test consume secrets
Jul 19 12:18:17.048: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e" in namespace "projected-8820" to be "success or failure"
Jul 19 12:18:17.390: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e": Phase="Pending", Reason="", readiness=false. Elapsed: 341.804566ms
Jul 19 12:18:19.393: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344590396s
Jul 19 12:18:21.586: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537394754s
Jul 19 12:18:23.589: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.540545302s
Jul 19 12:18:25.811: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762168304s
Jul 19 12:18:28.179: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e": Phase="Running", Reason="", readiness=true. Elapsed: 11.130173513s
Jul 19 12:18:30.181: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.133081252s
STEP: Saw pod success
Jul 19 12:18:30.181: INFO: Pod "pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e" satisfied condition "success or failure"
Jul 19 12:18:30.183: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e container projected-secret-volume-test: 
STEP: delete the pod
Jul 19 12:18:30.675: INFO: Waiting for pod pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e to disappear
Jul 19 12:18:30.913: INFO: Pod pod-projected-secrets-d58bf514-0088-418b-b2fb-652b6fa4ad4e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:18:30.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8820" for this suite.

• [SLOW TEST:16.431 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2501,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:18:32.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:18:49.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6375" for this suite.

• [SLOW TEST:17.300 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":159,"skipped":2515,"failed":0}
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:18:49.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul 19 12:18:56.554: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7425 pod-service-account-2c4bed55-1f30-4c81-86dc-c8da084d0309 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul 19 12:18:56.753: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7425 pod-service-account-2c4bed55-1f30-4c81-86dc-c8da084d0309 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul 19 12:18:56.939: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7425 pod-service-account-2c4bed55-1f30-4c81-86dc-c8da084d0309 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:18:57.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7425" for this suite.

• [SLOW TEST:7.792 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":160,"skipped":2515,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:18:57.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3024
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 19 12:18:58.821: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 19 12:19:29.201: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostname&protocol=http&host=10.244.2.39&port=8080&tries=1'] Namespace:pod-network-test-3024 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 12:19:29.201: INFO: >>> kubeConfig: /root/.kube/config
I0719 12:19:29.229256       6 log.go:172] (0xc002d94790) (0xc001969c20) Create stream
I0719 12:19:29.229293       6 log.go:172] (0xc002d94790) (0xc001969c20) Stream added, broadcasting: 1
I0719 12:19:29.231119       6 log.go:172] (0xc002d94790) Reply frame received for 1
I0719 12:19:29.231174       6 log.go:172] (0xc002d94790) (0xc001e72000) Create stream
I0719 12:19:29.231201       6 log.go:172] (0xc002d94790) (0xc001e72000) Stream added, broadcasting: 3
I0719 12:19:29.232107       6 log.go:172] (0xc002d94790) Reply frame received for 3
I0719 12:19:29.232141       6 log.go:172] (0xc002d94790) (0xc002732b40) Create stream
I0719 12:19:29.232153       6 log.go:172] (0xc002d94790) (0xc002732b40) Stream added, broadcasting: 5
I0719 12:19:29.233050       6 log.go:172] (0xc002d94790) Reply frame received for 5
I0719 12:19:29.314683       6 log.go:172] (0xc002d94790) Data frame received for 3
I0719 12:19:29.314719       6 log.go:172] (0xc001e72000) (3) Data frame handling
I0719 12:19:29.314745       6 log.go:172] (0xc001e72000) (3) Data frame sent
I0719 12:19:29.315102       6 log.go:172] (0xc002d94790) Data frame received for 3
I0719 12:19:29.315122       6 log.go:172] (0xc001e72000) (3) Data frame handling
I0719 12:19:29.315168       6 log.go:172] (0xc002d94790) Data frame received for 5
I0719 12:19:29.315188       6 log.go:172] (0xc002732b40) (5) Data frame handling
I0719 12:19:29.316879       6 log.go:172] (0xc002d94790) Data frame received for 1
I0719 12:19:29.316903       6 log.go:172] (0xc001969c20) (1) Data frame handling
I0719 12:19:29.316915       6 log.go:172] (0xc001969c20) (1) Data frame sent
I0719 12:19:29.316938       6 log.go:172] (0xc002d94790) (0xc001969c20) Stream removed, broadcasting: 1
I0719 12:19:29.316969       6 log.go:172] (0xc002d94790) Go away received
I0719 12:19:29.317045       6 log.go:172] (0xc002d94790) (0xc001969c20) Stream removed, broadcasting: 1
I0719 12:19:29.317066       6 log.go:172] (0xc002d94790) (0xc001e72000) Stream removed, broadcasting: 3
I0719 12:19:29.317077       6 log.go:172] (0xc002d94790) (0xc002732b40) Stream removed, broadcasting: 5
Jul 19 12:19:29.317: INFO: Waiting for responses: map[]
Jul 19 12:19:29.354: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostname&protocol=http&host=10.244.1.247&port=8080&tries=1'] Namespace:pod-network-test-3024 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 12:19:29.354: INFO: >>> kubeConfig: /root/.kube/config
I0719 12:19:29.551411       6 log.go:172] (0xc002d94bb0) (0xc001969f40) Create stream
I0719 12:19:29.551438       6 log.go:172] (0xc002d94bb0) (0xc001969f40) Stream added, broadcasting: 1
I0719 12:19:29.553520       6 log.go:172] (0xc002d94bb0) Reply frame received for 1
I0719 12:19:29.553561       6 log.go:172] (0xc002d94bb0) (0xc001e720a0) Create stream
I0719 12:19:29.553577       6 log.go:172] (0xc002d94bb0) (0xc001e720a0) Stream added, broadcasting: 3
I0719 12:19:29.554369       6 log.go:172] (0xc002d94bb0) Reply frame received for 3
I0719 12:19:29.554397       6 log.go:172] (0xc002d94bb0) (0xc001f908c0) Create stream
I0719 12:19:29.554408       6 log.go:172] (0xc002d94bb0) (0xc001f908c0) Stream added, broadcasting: 5
I0719 12:19:29.555263       6 log.go:172] (0xc002d94bb0) Reply frame received for 5
I0719 12:19:29.609055       6 log.go:172] (0xc002d94bb0) Data frame received for 3
I0719 12:19:29.609076       6 log.go:172] (0xc001e720a0) (3) Data frame handling
I0719 12:19:29.609087       6 log.go:172] (0xc001e720a0) (3) Data frame sent
I0719 12:19:29.609830       6 log.go:172] (0xc002d94bb0) Data frame received for 5
I0719 12:19:29.609856       6 log.go:172] (0xc001f908c0) (5) Data frame handling
I0719 12:19:29.609957       6 log.go:172] (0xc002d94bb0) Data frame received for 3
I0719 12:19:29.609968       6 log.go:172] (0xc001e720a0) (3) Data frame handling
I0719 12:19:29.611290       6 log.go:172] (0xc002d94bb0) Data frame received for 1
I0719 12:19:29.611305       6 log.go:172] (0xc001969f40) (1) Data frame handling
I0719 12:19:29.611311       6 log.go:172] (0xc001969f40) (1) Data frame sent
I0719 12:19:29.611329       6 log.go:172] (0xc002d94bb0) (0xc001969f40) Stream removed, broadcasting: 1
I0719 12:19:29.611341       6 log.go:172] (0xc002d94bb0) Go away received
I0719 12:19:29.611453       6 log.go:172] (0xc002d94bb0) (0xc001969f40) Stream removed, broadcasting: 1
I0719 12:19:29.611493       6 log.go:172] (0xc002d94bb0) (0xc001e720a0) Stream removed, broadcasting: 3
I0719 12:19:29.611510       6 log.go:172] (0xc002d94bb0) (0xc001f908c0) Stream removed, broadcasting: 5
Jul 19 12:19:29.611: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:19:29.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3024" for this suite.

• [SLOW TEST:32.366 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:19:29.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 19 12:19:29.938: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-resource-version 567d1649-ca61-4ca0-88d5-d493b913c7c2 2422482 0 2020-07-19 12:19:29 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 19 12:19:29.938: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-resource-version 567d1649-ca61-4ca0-88d5-d493b913c7c2 2422483 0 2020-07-19 12:19:29 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:19:29.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1159" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":162,"skipped":2548,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:19:30.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:19:47.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6723" for this suite.

• [SLOW TEST:17.912 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":163,"skipped":2548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:19:48.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 19 12:19:48.313: INFO: Waiting up to 5m0s for pod "pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c" in namespace "emptydir-5228" to be "success or failure"
Jul 19 12:19:48.364: INFO: Pod "pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.448514ms
Jul 19 12:19:50.412: INFO: Pod "pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098618644s
Jul 19 12:19:52.416: INFO: Pod "pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102497241s
Jul 19 12:19:54.420: INFO: Pod "pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.106636478s
STEP: Saw pod success
Jul 19 12:19:54.420: INFO: Pod "pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c" satisfied condition "success or failure"
Jul 19 12:19:54.423: INFO: Trying to get logs from node jerma-worker pod pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c container test-container: 
STEP: delete the pod
Jul 19 12:19:54.460: INFO: Waiting for pod pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c to disappear
Jul 19 12:19:54.463: INFO: Pod pod-ee458f28-b1f5-4003-a2b1-694ccc6e671c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:19:54.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5228" for this suite.

• [SLOW TEST:6.466 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2592,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:19:54.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:19:54.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5475" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":165,"skipped":2607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:19:54.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:06.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5373" for this suite.

• [SLOW TEST:11.872 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":166,"skipped":2636,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:06.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 19 12:20:06.462: INFO: Waiting up to 5m0s for pod "pod-82aec842-ee72-488a-bb8f-4a29542e980e" in namespace "emptydir-9489" to be "success or failure"
Jul 19 12:20:06.466: INFO: Pod "pod-82aec842-ee72-488a-bb8f-4a29542e980e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797596ms
Jul 19 12:20:08.556: INFO: Pod "pod-82aec842-ee72-488a-bb8f-4a29542e980e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09413223s
Jul 19 12:20:10.677: INFO: Pod "pod-82aec842-ee72-488a-bb8f-4a29542e980e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214593257s
Jul 19 12:20:12.700: INFO: Pod "pod-82aec842-ee72-488a-bb8f-4a29542e980e": Phase="Running", Reason="", readiness=true. Elapsed: 6.238130654s
Jul 19 12:20:14.703: INFO: Pod "pod-82aec842-ee72-488a-bb8f-4a29542e980e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.241321523s
STEP: Saw pod success
Jul 19 12:20:14.703: INFO: Pod "pod-82aec842-ee72-488a-bb8f-4a29542e980e" satisfied condition "success or failure"
Jul 19 12:20:14.706: INFO: Trying to get logs from node jerma-worker pod pod-82aec842-ee72-488a-bb8f-4a29542e980e container test-container: 
STEP: delete the pod
Jul 19 12:20:14.791: INFO: Waiting for pod pod-82aec842-ee72-488a-bb8f-4a29542e980e to disappear
Jul 19 12:20:14.804: INFO: Pod pod-82aec842-ee72-488a-bb8f-4a29542e980e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:14.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9489" for this suite.

• [SLOW TEST:8.400 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2643,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:14.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 19 12:20:14.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913" in namespace "downward-api-4370" to be "success or failure"
Jul 19 12:20:14.977: INFO: Pod "downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913": Phase="Pending", Reason="", readiness=false. Elapsed: 9.408033ms
Jul 19 12:20:16.982: INFO: Pod "downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013638863s
Jul 19 12:20:18.986: INFO: Pod "downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017969655s
STEP: Saw pod success
Jul 19 12:20:18.986: INFO: Pod "downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913" satisfied condition "success or failure"
Jul 19 12:20:18.989: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913 container client-container: 
STEP: delete the pod
Jul 19 12:20:19.051: INFO: Waiting for pod downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913 to disappear
Jul 19 12:20:19.215: INFO: Pod downwardapi-volume-acedf470-ea88-491c-b4b8-385e606a0913 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:19.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4370" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2664,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:19.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:20:19.786: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 12:20:21.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758019, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758019, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758019, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758019, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:20:24.990: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:25.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3833" for this suite.
STEP: Destroying namespace "webhook-3833-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.009 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":169,"skipped":2670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:25.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 19 12:20:25.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6" in namespace "downward-api-2379" to be "success or failure"
Jul 19 12:20:25.375: INFO: Pod "downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.16114ms
Jul 19 12:20:27.378: INFO: Pod "downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041290957s
Jul 19 12:20:29.381: INFO: Pod "downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6": Phase="Running", Reason="", readiness=true. Elapsed: 4.045064923s
Jul 19 12:20:31.386: INFO: Pod "downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049160336s
STEP: Saw pod success
Jul 19 12:20:31.386: INFO: Pod "downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6" satisfied condition "success or failure"
Jul 19 12:20:31.388: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6 container client-container: 
STEP: delete the pod
Jul 19 12:20:31.423: INFO: Waiting for pod downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6 to disappear
Jul 19 12:20:31.427: INFO: Pod downwardapi-volume-a02dcf92-fb9f-4bff-a67a-c304b2346ae6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:31.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2379" for this suite.

• [SLOW TEST:6.199 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2706,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:31.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:31.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8024" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":171,"skipped":2718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:31.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 19 12:20:31.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2618'
Jul 19 12:20:31.779: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 19 12:20:31.779: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495
Jul 19 12:20:33.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2618'
Jul 19 12:20:34.191: INFO: stderr: ""
Jul 19 12:20:34.191: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:34.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2618" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":172,"skipped":2757,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:34.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:20:37.873: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 12:20:40.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758038, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758038, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758038, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758037, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:20:42.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758038, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758038, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758038, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758037, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:20:45.403: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:45.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5449" for this suite.
STEP: Destroying namespace "webhook-5449-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.636 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":173,"skipped":2766,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:45.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-e9655d49-756a-4efb-bd73-92a725cafbec
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e9655d49-756a-4efb-bd73-92a725cafbec
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:52.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5416" for this suite.

• [SLOW TEST:6.531 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2792,"failed":0}
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:52.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jul 19 12:20:58.603: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3104 PodName:pod-sharedvolume-5236ebb9-507e-4f17-a927-a35c1268ffad ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 12:20:58.603: INFO: >>> kubeConfig: /root/.kube/config
I0719 12:20:58.632258       6 log.go:172] (0xc0025d7ef0) (0xc00291f220) Create stream
I0719 12:20:58.632325       6 log.go:172] (0xc0025d7ef0) (0xc00291f220) Stream added, broadcasting: 1
I0719 12:20:58.633657       6 log.go:172] (0xc0025d7ef0) Reply frame received for 1
I0719 12:20:58.633681       6 log.go:172] (0xc0025d7ef0) (0xc00291f2c0) Create stream
I0719 12:20:58.633688       6 log.go:172] (0xc0025d7ef0) (0xc00291f2c0) Stream added, broadcasting: 3
I0719 12:20:58.634247       6 log.go:172] (0xc0025d7ef0) Reply frame received for 3
I0719 12:20:58.634268       6 log.go:172] (0xc0025d7ef0) (0xc00172c960) Create stream
I0719 12:20:58.634279       6 log.go:172] (0xc0025d7ef0) (0xc00172c960) Stream added, broadcasting: 5
I0719 12:20:58.634896       6 log.go:172] (0xc0025d7ef0) Reply frame received for 5
I0719 12:20:58.696458       6 log.go:172] (0xc0025d7ef0) Data frame received for 5
I0719 12:20:58.696497       6 log.go:172] (0xc00172c960) (5) Data frame handling
I0719 12:20:58.696529       6 log.go:172] (0xc0025d7ef0) Data frame received for 3
I0719 12:20:58.696543       6 log.go:172] (0xc00291f2c0) (3) Data frame handling
I0719 12:20:58.696559       6 log.go:172] (0xc00291f2c0) (3) Data frame sent
I0719 12:20:58.696576       6 log.go:172] (0xc0025d7ef0) Data frame received for 3
I0719 12:20:58.696586       6 log.go:172] (0xc00291f2c0) (3) Data frame handling
I0719 12:20:58.697835       6 log.go:172] (0xc0025d7ef0) Data frame received for 1
I0719 12:20:58.697858       6 log.go:172] (0xc00291f220) (1) Data frame handling
I0719 12:20:58.697869       6 log.go:172] (0xc00291f220) (1) Data frame sent
I0719 12:20:58.697883       6 log.go:172] (0xc0025d7ef0) (0xc00291f220) Stream removed, broadcasting: 1
I0719 12:20:58.697907       6 log.go:172] (0xc0025d7ef0) Go away received
I0719 12:20:58.698071       6 log.go:172] (0xc0025d7ef0) (0xc00291f220) Stream removed, broadcasting: 1
I0719 12:20:58.698087       6 log.go:172] (0xc0025d7ef0) (0xc00291f2c0) Stream removed, broadcasting: 3
I0719 12:20:58.698095       6 log.go:172] (0xc0025d7ef0) (0xc00172c960) Stream removed, broadcasting: 5
Jul 19 12:20:58.698: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:20:58.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3104" for this suite.

• [SLOW TEST:6.338 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":175,"skipped":2792,"failed":0}
SSSSSSSSSSSS
------------------------------
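
What this spec exercises is two containers in one pod mounting the same emptyDir volume, so a file written by one is readable (via the exec above) from the other. A minimal sketch; container names, image, and namespace are illustrative, and it assumes client-go v0.18+.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
		Spec: corev1.PodSpec{
			// One emptyDir volume mounted by both containers; what the writer
			// creates, the reader can cat.
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "writer",
					Image:        "busybox",
					Command:      []string{"/bin/sh", "-c", "echo Hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
				},
				{
					Name:         "reader",
					Image:        "busybox",
					Command:      []string{"/bin/sh", "-c", "sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
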
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:20:58.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:20:58.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jul 19 12:20:59.473: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-19T12:20:59Z generation:1 name:name1 resourceVersion:2423130 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:966c2632-6c8e-42d1-aacb-e4ab0a882282] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jul 19 12:21:09.478: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-19T12:21:09Z generation:1 name:name2 resourceVersion:2423176 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:affa252a-b1be-4c18-8bc4-a0002bc54bff] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jul 19 12:21:19.516: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-19T12:20:59Z generation:2 name:name1 resourceVersion:2423206 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:966c2632-6c8e-42d1-aacb-e4ab0a882282] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jul 19 12:21:29.522: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-19T12:21:09Z generation:2 name:name2 resourceVersion:2423239 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:affa252a-b1be-4c18-8bc4-a0002bc54bff] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jul 19 12:21:39.531: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-19T12:20:59Z generation:2 name:name1 resourceVersion:2423266 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:966c2632-6c8e-42d1-aacb-e4ab0a882282] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jul 19 12:21:49.539: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-19T12:21:09Z generation:2 name:name2 resourceVersion:2423294 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:affa252a-b1be-4c18-8bc4-a0002bc54bff] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:00.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7019" for this suite.

• [SLOW TEST:61.352 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":176,"skipped":2804,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
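
The ADDED/MODIFIED/DELETED events above come from a watch on the custom resource. A minimal sketch using the dynamic client against the same cluster-scoped group/version/resource seen in the selfLinks; assumes client-go v0.18+.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// From the log: /apis/mygroup.example.com/v1beta1/noxus
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Prints ADDED / MODIFIED / DELETED plus the object, as in the log above.
		fmt.Println(ev.Type, ev.Object)
	}
}
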
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:00.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:22:00.177: INFO: Create a RollingUpdate DaemonSet
Jul 19 12:22:00.179: INFO: Check that daemon pods launch on every node of the cluster
Jul 19 12:22:00.207: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:00.232: INFO: Number of nodes with available pods: 0
Jul 19 12:22:00.232: INFO: Node jerma-worker is running more than one daemon pod
Jul 19 12:22:01.237: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:01.241: INFO: Number of nodes with available pods: 0
Jul 19 12:22:01.241: INFO: Node jerma-worker is running more than one daemon pod
Jul 19 12:22:02.506: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:02.509: INFO: Number of nodes with available pods: 0
Jul 19 12:22:02.509: INFO: Node jerma-worker is running more than one daemon pod
Jul 19 12:22:03.237: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:03.239: INFO: Number of nodes with available pods: 0
Jul 19 12:22:03.239: INFO: Node jerma-worker is running more than one daemon pod
Jul 19 12:22:04.239: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:04.241: INFO: Number of nodes with available pods: 0
Jul 19 12:22:04.241: INFO: Node jerma-worker is running more than one daemon pod
Jul 19 12:22:05.320: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:05.324: INFO: Number of nodes with available pods: 2
Jul 19 12:22:05.324: INFO: Number of running nodes: 2, number of available pods: 2
Jul 19 12:22:05.324: INFO: Update the DaemonSet to trigger a rollout
Jul 19 12:22:05.339: INFO: Updating DaemonSet daemon-set
Jul 19 12:22:10.355: INFO: Roll back the DaemonSet before rollout is complete
Jul 19 12:22:10.361: INFO: Updating DaemonSet daemon-set
Jul 19 12:22:10.361: INFO: Make sure DaemonSet rollback is complete
Jul 19 12:22:10.374: INFO: Wrong image for pod: daemon-set-5rlcl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 19 12:22:10.374: INFO: Pod daemon-set-5rlcl is not available
Jul 19 12:22:10.392: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:11.396: INFO: Wrong image for pod: daemon-set-5rlcl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 19 12:22:11.396: INFO: Pod daemon-set-5rlcl is not available
Jul 19 12:22:11.400: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 19 12:22:12.397: INFO: Pod daemon-set-mz7wb is not available
Jul 19 12:22:12.401: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8510, will wait for the garbage collector to delete the pods
Jul 19 12:22:12.467: INFO: Deleting DaemonSet.extensions daemon-set took: 5.7332ms
Jul 19 12:22:12.967: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.235138ms
Jul 19 12:22:16.970: INFO: Number of nodes with available pods: 0
Jul 19 12:22:16.970: INFO: Number of running nodes: 0, number of available pods: 0
Jul 19 12:22:16.972: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8510/daemonsets","resourceVersion":"2423458"},"items":null}

Jul 19 12:22:16.974: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8510/pods","resourceVersion":"2423458"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:16.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8510" for this suite.

• [SLOW TEST:16.932 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":177,"skipped":2823,"failed":0}
S
------------------------------
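
The rollout-and-rollback sequence above can be reproduced by updating the pod template image and then restoring it before the rollout finishes; nodes that never ran the bad image keep their pods, hence "without unnecessary restarts". A minimal sketch: namespace, DaemonSet name, and images are taken from the log; production code should retry updates on conflict; assumes client-go v0.18+.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	dsClient := cs.AppsV1().DaemonSets("daemonsets-8510")
	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Trigger a rollout with a non-existent image, as the test does ...
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// ... then roll back mid-rollout by restoring the previous image.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
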
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:16.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:17.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6635" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2824,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:17.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 19 12:22:17.563: INFO: Waiting up to 5m0s for pod "pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c" in namespace "emptydir-7973" to be "success or failure"
Jul 19 12:22:17.614: INFO: Pod "pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.688319ms
Jul 19 12:22:19.618: INFO: Pod "pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055051209s
Jul 19 12:22:21.621: INFO: Pod "pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.058296163s
Jul 19 12:22:23.634: INFO: Pod "pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071311341s
STEP: Saw pod success
Jul 19 12:22:23.634: INFO: Pod "pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c" satisfied condition "success or failure"
Jul 19 12:22:23.637: INFO: Trying to get logs from node jerma-worker2 pod pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c container test-container: 
STEP: delete the pod
Jul 19 12:22:23.659: INFO: Waiting for pod pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c to disappear
Jul 19 12:22:23.664: INFO: Pod pod-d492ddc2-e10b-45b7-a162-bcf49e69ae4c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:23.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7973" for this suite.

• [SLOW TEST:6.227 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2870,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
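
The volume under test is a tmpfs-backed emptyDir written by a non-root user. A minimal pod sketch; the UID, image, and file contents are illustrative, and it assumes client-go v0.18+.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			// Run as a non-root user (illustrative UID).
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "echo x > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
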
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:23.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:22:23.837: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul 19 12:22:28.841: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 19 12:22:28.841: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul 19 12:22:28.913: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-3078 /apis/apps/v1/namespaces/deployment-3078/deployments/test-cleanup-deployment 81dab8ce-f652-4c6f-abb1-4bf21d389bd5 2423575 1 2020-07-19 12:22:28 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00537ebc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jul 19 12:22:28.943: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-3078 /apis/apps/v1/namespaces/deployment-3078/replicasets/test-cleanup-deployment-55ffc6b7b6 77d9864f-afbe-4f9a-abde-5e9f2fd2d63d 2423582 1 2020-07-19 12:22:28 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 81dab8ce-f652-4c6f-abb1-4bf21d389bd5 0xc00537eff7 0xc00537eff8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00537f068  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 19 12:22:28.943: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul 19 12:22:28.943: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-3078 /apis/apps/v1/namespaces/deployment-3078/replicasets/test-cleanup-controller aee4c3ba-1c7c-4bd5-9270-e772cba3eaaf 2423576 1 2020-07-19 12:22:23 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 81dab8ce-f652-4c6f-abb1-4bf21d389bd5 0xc00537eeff 0xc00537ef10}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00537ef88  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 19 12:22:29.039: INFO: Pod "test-cleanup-controller-gzs27" is available:
&Pod{ObjectMeta:{test-cleanup-controller-gzs27 test-cleanup-controller- deployment-3078 /api/v1/namespaces/deployment-3078/pods/test-cleanup-controller-gzs27 2549e682-b67c-4701-b40c-13ced880c82e 2423564 0 2020-07-19 12:22:23 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller aee4c3ba-1c7c-4bd5-9270-e772cba3eaaf 0xc00537f4a7 0xc00537f4a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x4tnk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x4tnk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x4tnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:22:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:22:23 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.49,StartTime:2020-07-19 12:22:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 12:22:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6bb5dbea1414a8210b6390642e352beae7949259da2e5b749d8d1cb2805fb43a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 19 12:22:29.039: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-l7twk" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-l7twk test-cleanup-deployment-55ffc6b7b6- deployment-3078 /api/v1/namespaces/deployment-3078/pods/test-cleanup-deployment-55ffc6b7b6-l7twk 6598ae1e-76c1-46da-9f30-47f7e0fdb750 2423583 0 2020-07-19 12:22:28 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 77d9864f-afbe-4f9a-abde-5e9f2fd2d63d 0xc00537f637 0xc00537f638}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x4tnk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x4tnk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x4tnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:22:28 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:29.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3078" for this suite.

• [SLOW TEST:5.380 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":180,"skipped":2900,"failed":0}
SS
------------------------------
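
History cleanup is driven by the Deployment's revisionHistoryLimit: with it set to 0, old ReplicaSets are deleted as soon as they are scaled down, which is the RevisionHistoryLimit:*0 visible in the dump above. A minimal sketch; the name and image come from the log, labels and replica count are illustrative, and it assumes client-go v0.18+.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// Keep zero old ReplicaSets around: they are garbage-collected as
			// soon as they are fully scaled down.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: map[string]string{"name": "cleanup-pod"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
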
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:29.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 19 12:22:29.178: INFO: Waiting up to 5m0s for pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a" in namespace "emptydir-7718" to be "success or failure"
Jul 19 12:22:29.188: INFO: Pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.954833ms
Jul 19 12:22:31.371: INFO: Pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193580446s
Jul 19 12:22:33.727: INFO: Pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548986394s
Jul 19 12:22:35.891: INFO: Pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.713152208s
Jul 19 12:22:38.014: INFO: Pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a": Phase="Running", Reason="", readiness=true. Elapsed: 8.836236926s
Jul 19 12:22:40.017: INFO: Pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.83967451s
STEP: Saw pod success
Jul 19 12:22:40.017: INFO: Pod "pod-9c2f1fa6-8348-4526-a879-62d03865595a" satisfied condition "success or failure"
Jul 19 12:22:40.020: INFO: Trying to get logs from node jerma-worker2 pod pod-9c2f1fa6-8348-4526-a879-62d03865595a container test-container: 
STEP: delete the pod
Jul 19 12:22:40.062: INFO: Waiting for pod pod-9c2f1fa6-8348-4526-a879-62d03865595a to disappear
Jul 19 12:22:40.072: INFO: Pod pod-9c2f1fa6-8348-4526-a879-62d03865595a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:40.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7718" for this suite.

• [SLOW TEST:11.027 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:40.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:22:40.581: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul 19 12:22:42.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758160, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758160, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758160, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758160, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:22:45.773: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:22:45.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:47.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3486" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:7.097 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":182,"skipped":2928,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
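
The conversion is wired up through the CRD's spec.conversion stanza, which points the API server at the deployed webhook service. A minimal sketch of that stanza using the apiextensions.k8s.io/v1 Go types; the service path, port, and CA bundle are illustrative placeholders, not values from the suite.

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func strPtr(s string) *string  { return &s }
func int32Ptr(i int32) *int32  { return &i }

func main() {
	// Placeholder: a real CRD needs the PEM CA that signed the webhook's
	// serving certificate here.
	var caBundle []byte
	conv := &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-3486",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      strPtr("/crdconvert"), // illustrative path
					Port:      int32Ptr(9443),        // illustrative port
				},
				CABundle: caBundle,
			},
			// Versions the webhook accepts in ConversionReview requests.
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	// This stanza goes into CustomResourceDefinitionSpec.Conversion.
	fmt.Printf("%+v\n", conv)
}
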
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:47.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:51.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2001" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:51.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:22:52.660: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 12:22:54.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758172, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758172, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758172, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730758172, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:22:57.931: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:22:58.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9090" for this suite.
STEP: Destroying namespace "webhook-9090-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.209 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":184,"skipped":2973,"failed":0}
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:22:58.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul 19 12:23:03.255: INFO: Successfully updated pod "annotationupdatea79083d3-5f88-4b65-9c25-2efb9707650e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:23:05.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1715" for this suite.

• [SLOW TEST:6.782 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
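
The mounted file tracks the pod's annotations through a downward API volume; when the annotations are updated, the kubelet rewrites the file, which is what the test observes. A minimal pod sketch; names, image, and the annotation are illustrative; assumes client-go v0.18+.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
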
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:23:05.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 19 12:23:09.566: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:23:09.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-430" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2998,"failed":0}
SS
------------------------------
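
The termination message comes from a file the container writes before exiting; the kubelet copies its contents into the container status, which is what the "Expected: &{DONE}" assertion checks. A minimal sketch with a non-default path and a non-root user; the path, UID, and image are illustrative; assumes client-go v0.18+.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)}, // non-root, illustrative UID
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path; on termination the kubelet reads this file
				// into status.containerStatuses[].state.terminated.message.
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
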
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:23:09.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-fd57dc6c-6cd2-40e8-a787-2d7e834c78cf
STEP: Creating a pod to test consume secrets
Jul 19 12:23:09.698: INFO: Waiting up to 5m0s for pod "pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68" in namespace "secrets-847" to be "success or failure"
Jul 19 12:23:09.702: INFO: Pod "pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68": Phase="Pending", Reason="", readiness=false. Elapsed: 3.943348ms
Jul 19 12:23:11.706: INFO: Pod "pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00787343s
Jul 19 12:23:13.710: INFO: Pod "pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011806219s
Jul 19 12:23:15.713: INFO: Pod "pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01520918s
STEP: Saw pod success
Jul 19 12:23:15.713: INFO: Pod "pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68" satisfied condition "success or failure"
Jul 19 12:23:15.716: INFO: Trying to get logs from node jerma-worker pod pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68 container secret-env-test: 
STEP: delete the pod
Jul 19 12:23:15.741: INFO: Waiting for pod pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68 to disappear
Jul 19 12:23:15.747: INFO: Pod pod-secrets-cb4566ae-02b8-4b2f-9da7-d9a1ebb4ae68 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:23:15.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-847" for this suite.

• [SLOW TEST:6.108 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3000,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
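
The secret value reaches the container through an environment variable with a secretKeyRef. A minimal sketch; the secret name, key, and value are illustrative; assumes client-go v0.18+.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		StringData: map[string]string{"data-1": "value-1"}, // illustrative key/value
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-env-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
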
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:23:15.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 19 12:23:15.854: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e" in namespace "projected-2314" to be "success or failure"
Jul 19 12:23:15.936: INFO: Pod "downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e": Phase="Pending", Reason="", readiness=false. Elapsed: 81.402065ms
Jul 19 12:23:17.940: INFO: Pod "downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086091317s
Jul 19 12:23:19.945: INFO: Pod "downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e": Phase="Running", Reason="", readiness=true. Elapsed: 4.090445431s
Jul 19 12:23:21.949: INFO: Pod "downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094607837s
STEP: Saw pod success
Jul 19 12:23:21.949: INFO: Pod "downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e" satisfied condition "success or failure"
Jul 19 12:23:21.952: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e container client-container: 
STEP: delete the pod
Jul 19 12:23:22.076: INFO: Waiting for pod downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e to disappear
Jul 19 12:23:22.106: INFO: Pod downwardapi-volume-49e09d59-d2f6-473d-910a-8c1be805e36e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:23:22.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2314" for this suite.

• [SLOW TEST:6.361 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:23:22.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:23:26.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1820" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":189,"skipped":3082,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:23:26.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-092a21dd-26b8-49fc-afd9-cc14e27ed6e3 in namespace container-probe-4757
Jul 19 12:23:30.864: INFO: Started pod busybox-092a21dd-26b8-49fc-afd9-cc14e27ed6e3 in namespace container-probe-4757
STEP: checking the pod's current state and verifying that restartCount is present
Jul 19 12:23:30.867: INFO: Initial restart count of pod busybox-092a21dd-26b8-49fc-afd9-cc14e27ed6e3 is 0
Jul 19 12:24:21.072: INFO: Restart count of pod container-probe-4757/busybox-092a21dd-26b8-49fc-afd9-cc14e27ed6e3 is now 1 (50.205672022s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:24:21.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4757" for this suite.

• [SLOW TEST:54.558 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3091,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:24:21.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:24:37.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1636" for this suite.

• [SLOW TEST:16.376 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":191,"skipped":3122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:24:37.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul 19 12:24:42.298: INFO: Successfully updated pod "annotationupdate5115b1cd-bb41-45f3-baa2-6e54577f46af"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:24:46.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1077" for this suite.

• [SLOW TEST:8.824 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3172,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:24:46.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-7xrb
STEP: Creating a pod to test atomic-volume-subpath
Jul 19 12:24:46.540: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7xrb" in namespace "subpath-194" to be "success or failure"
Jul 19 12:24:46.556: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.714743ms
Jul 19 12:24:48.560: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020009319s
Jul 19 12:24:50.564: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024148021s
Jul 19 12:24:52.662: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 6.122510164s
Jul 19 12:24:54.666: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 8.125917691s
Jul 19 12:24:56.819: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 10.278653179s
Jul 19 12:24:59.106: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 12.56585329s
Jul 19 12:25:01.110: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 14.569711773s
Jul 19 12:25:03.114: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 16.574302159s
Jul 19 12:25:05.118: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 18.578268618s
Jul 19 12:25:07.122: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 20.582537305s
Jul 19 12:25:09.201: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 22.661298276s
Jul 19 12:25:11.205: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Running", Reason="", readiness=true. Elapsed: 24.665056281s
Jul 19 12:25:13.208: INFO: Pod "pod-subpath-test-downwardapi-7xrb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.667929526s
STEP: Saw pod success
Jul 19 12:25:13.208: INFO: Pod "pod-subpath-test-downwardapi-7xrb" satisfied condition "success or failure"
Jul 19 12:25:13.210: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-7xrb container test-container-subpath-downwardapi-7xrb: 
STEP: delete the pod
Jul 19 12:25:13.307: INFO: Waiting for pod pod-subpath-test-downwardapi-7xrb to disappear
Jul 19 12:25:13.405: INFO: Pod pod-subpath-test-downwardapi-7xrb no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-7xrb
Jul 19 12:25:13.405: INFO: Deleting pod "pod-subpath-test-downwardapi-7xrb" in namespace "subpath-194"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:25:13.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-194" for this suite.

• [SLOW TEST:27.058 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":193,"skipped":3173,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:25:13.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 19 12:25:13.554: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 19 12:25:13.571: INFO: Waiting for terminating namespaces to be deleted...
Jul 19 12:25:13.574: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul 19 12:25:13.579: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container status recorded)
Jul 19 12:25:13.579: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 19 12:25:13.580: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container status recorded)
Jul 19 12:25:13.580: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 19 12:25:13.580: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul 19 12:25:13.585: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container status recorded)
Jul 19 12:25:13.585: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 19 12:25:13.585: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container status recorded)
Jul 19 12:25:13.585: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162326f1f1ab48a4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:25:14.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1553" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":194,"skipped":3194,"failed":0}

------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:25:14.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b in namespace container-probe-8333
Jul 19 12:25:20.718: INFO: Started pod liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b in namespace container-probe-8333
STEP: checking the pod's current state and verifying that restartCount is present
Jul 19 12:25:20.722: INFO: Initial restart count of pod liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b is 0
Jul 19 12:25:41.051: INFO: Restart count of pod container-probe-8333/liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b is now 1 (20.329586408s elapsed)
Jul 19 12:26:01.090: INFO: Restart count of pod container-probe-8333/liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b is now 2 (40.368461252s elapsed)
Jul 19 12:26:21.127: INFO: Restart count of pod container-probe-8333/liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b is now 3 (1m0.405038741s elapsed)
Jul 19 12:26:41.172: INFO: Restart count of pod container-probe-8333/liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b is now 4 (1m20.45078186s elapsed)
Jul 19 12:27:43.521: INFO: Restart count of pod container-probe-8333/liveness-ea0d5281-401b-4a49-911d-7a09d7de8d3b is now 5 (2m22.799445077s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:27:43.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8333" for this suite.

• [SLOW TEST:148.962 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3194,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:27:43.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-71e4cd0a-61ae-49f4-adcf-1438c40b3a18
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:27:43.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7942" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":196,"skipped":3225,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:27:43.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul 19 12:27:44.123: INFO: namespace kubectl-1714
Jul 19 12:27:44.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1714'
Jul 19 12:27:57.342: INFO: stderr: ""
Jul 19 12:27:57.342: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 19 12:27:58.378: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:27:58.378: INFO: Found 0 / 1
Jul 19 12:27:59.345: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:27:59.345: INFO: Found 0 / 1
Jul 19 12:28:00.346: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:28:00.346: INFO: Found 0 / 1
Jul 19 12:28:01.432: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:28:01.432: INFO: Found 0 / 1
Jul 19 12:28:02.346: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:28:02.346: INFO: Found 0 / 1
Jul 19 12:28:03.346: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:28:03.346: INFO: Found 1 / 1
Jul 19 12:28:03.346: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 19 12:28:03.349: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:28:03.349: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Jul 19 12:28:03.349: INFO: wait on agnhost-master startup in kubectl-1714 
Jul 19 12:28:03.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-n7jdt agnhost-master --namespace=kubectl-1714'
Jul 19 12:28:03.464: INFO: stderr: ""
Jul 19 12:28:03.464: INFO: stdout: "Paused\n"
STEP: exposing RC
Jul 19 12:28:03.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1714'
Jul 19 12:28:03.603: INFO: stderr: ""
Jul 19 12:28:03.603: INFO: stdout: "service/rm2 exposed\n"
Jul 19 12:28:03.623: INFO: Service rm2 in namespace kubectl-1714 found.
STEP: exposing service
Jul 19 12:28:05.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1714'
Jul 19 12:28:05.755: INFO: stderr: ""
Jul 19 12:28:05.755: INFO: stdout: "service/rm3 exposed\n"
Jul 19 12:28:05.797: INFO: Service rm3 in namespace kubectl-1714 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:28:07.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1714" for this suite.

• [SLOW TEST:24.121 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":197,"skipped":3235,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:28:07.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357
STEP: creating a pod
Jul 19 12:28:07.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4500 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jul 19 12:28:07.970: INFO: stderr: ""
Jul 19 12:28:07.970: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jul 19 12:28:07.970: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jul 19 12:28:07.970: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4500" to be "running and ready, or succeeded"
Jul 19 12:28:07.973: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.877787ms
Jul 19 12:28:09.977: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006890172s
Jul 19 12:28:11.981: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.010532835s
Jul 19 12:28:11.981: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jul 19 12:28:11.981: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jul 19 12:28:11.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4500'
Jul 19 12:28:12.092: INFO: stderr: ""
Jul 19 12:28:12.092: INFO: stdout: "I0719 12:28:11.244746       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/2ccq 416\nI0719 12:28:11.444923       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/bl85 446\nI0719 12:28:11.644907       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/d7w 338\nI0719 12:28:11.845075       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/qvq 284\nI0719 12:28:12.044897       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/9bm 232\n"
STEP: limiting log lines
Jul 19 12:28:12.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4500 --tail=1'
Jul 19 12:28:12.440: INFO: stderr: ""
Jul 19 12:28:12.441: INFO: stdout: "I0719 12:28:12.244922       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/zcd 559\n"
Jul 19 12:28:12.441: INFO: got output "I0719 12:28:12.244922       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/zcd 559\n"
STEP: limiting log bytes
Jul 19 12:28:12.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4500 --limit-bytes=1'
Jul 19 12:28:12.590: INFO: stderr: ""
Jul 19 12:28:12.590: INFO: stdout: "I"
Jul 19 12:28:12.590: INFO: got output "I"
STEP: exposing timestamps
Jul 19 12:28:12.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4500 --tail=1 --timestamps'
Jul 19 12:28:12.695: INFO: stderr: ""
Jul 19 12:28:12.695: INFO: stdout: "2020-07-19T12:28:12.445030062Z I0719 12:28:12.444874       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/8mq9 390\n2020-07-19T12:28:12.645022886Z I0719 12:28:12.644894       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/k2kg 240\n"
Jul 19 12:28:12.695: INFO: got output "2020-07-19T12:28:12.445030062Z I0719 12:28:12.444874       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/8mq9 390\n2020-07-19T12:28:12.645022886Z I0719 12:28:12.644894       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/k2kg 240\n"
Jul 19 12:28:12.695: FAIL: Expected
    : 2
to equal
    : 1
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
Jul 19 12:28:12.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4500'
Jul 19 12:28:27.359: INFO: stderr: ""
Jul 19 12:28:27.359: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-4500".
STEP: Found 5 events.
Jul 19 12:28:27.375: INFO: At 2020-07-19 12:28:07 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-4500/logs-generator to jerma-worker2
Jul 19 12:28:27.375: INFO: At 2020-07-19 12:28:09 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jul 19 12:28:27.376: INFO: At 2020-07-19 12:28:11 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Created: Created container logs-generator
Jul 19 12:28:27.376: INFO: At 2020-07-19 12:28:11 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Started: Started container logs-generator
Jul 19 12:28:27.376: INFO: At 2020-07-19 12:28:13 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Killing: Stopping container logs-generator
Jul 19 12:28:27.378: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul 19 12:28:27.378: INFO: 
Jul 19 12:28:27.382: INFO: 
Logging node info for node jerma-control-plane
Jul 19 12:28:27.384: INFO: Node Info: &Node{ObjectMeta:{jerma-control-plane   /api/v1/nodes/jerma-control-plane ac6a40a7-84f7-46ba-8321-2df671b5dd0c 2425056 0 2020-07-10 10:25:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-19 12:27:48 +0000 UTC,LastTransitionTime:2020-07-10 10:25:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-19 12:27:48 +0000 UTC,LastTransitionTime:2020-07-10 10:25:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-19 12:27:48 +0000 UTC,LastTransitionTime:2020-07-10 10:25:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-19 12:27:48 +0000 UTC,LastTransitionTime:2020-07-10 10:26:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:jerma-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78cb62e1bd20401ebc9a91779e3da282,SystemUUID:5fa8becb-168a-4d58-8252-a288ac7a8260,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul 19 12:28:27.384: INFO: 
Logging kubelet events for node jerma-control-plane
Jul 19 12:28:27.387: INFO: 
Logging pods the kubelet thinks are on node jerma-control-plane
Jul 19 12:28:27.413: INFO: coredns-6955765f44-bq97f started at 2020-07-10 10:26:33 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container coredns ready: true, restart count 0
Jul 19 12:28:27.413: INFO: kube-scheduler-jerma-control-plane started at 2020-07-10 10:26:01 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container kube-scheduler ready: true, restart count 0
Jul 19 12:28:27.413: INFO: etcd-jerma-control-plane started at 2020-07-10 10:26:01 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container etcd ready: true, restart count 0
Jul 19 12:28:27.413: INFO: kube-apiserver-jerma-control-plane started at 2020-07-10 10:26:01 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container kube-apiserver ready: true, restart count 0
Jul 19 12:28:27.413: INFO: kindnet-b87md started at 2020-07-10 10:26:15 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 19 12:28:27.413: INFO: coredns-6955765f44-9rqh9 started at 2020-07-10 10:26:31 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container coredns ready: true, restart count 0
Jul 19 12:28:27.413: INFO: kube-controller-manager-jerma-control-plane started at 2020-07-10 10:26:01 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container kube-controller-manager ready: true, restart count 0
Jul 19 12:28:27.413: INFO: kube-proxy-svrlv started at 2020-07-10 10:26:15 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 19 12:28:27.413: INFO: local-path-provisioner-58f6947c7-rkzsd started at 2020-07-10 10:26:31 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.413: INFO: 	Container local-path-provisioner ready: true, restart count 0
W0719 12:28:27.416911       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 19 12:28:27.516: INFO: 
Latency metrics for node jerma-control-plane
Jul 19 12:28:27.516: INFO: 
Logging node info for node jerma-worker
Jul 19 12:28:27.520: INFO: Node Info: &Node{ObjectMeta:{jerma-worker   /api/v1/nodes/jerma-worker c432e82b-e36b-4d97-9ae4-607b959ebda9 2424116 0 2020-07-10 10:26:32 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-19 12:23:30 +0000 UTC,LastTransitionTime:2020-07-10 10:26:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-19 12:23:30 +0000 UTC,LastTransitionTime:2020-07-10 10:26:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-19 12:23:30 +0000 UTC,LastTransitionTime:2020-07-10 10:26:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-19 12:23:30 +0000 UTC,LastTransitionTime:2020-07-10 10:27:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:jerma-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45379d8ba8234965b20922045ad7d4f4,SystemUUID:cd2f4e84-28b7-4c1f-be28-c375dfc6f3c7,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:24f49e5e936930e808cd79cac72fd4f2dc87e97b33a9dedecf60d0eb1f655015 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386316854,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:21ea1bbce8747d80fc46c07ee0bdb94653036ee544413853074f39900798a7d8 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360555271,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:ba47a8963e0683886890de11cf65942f3460ec4e2ad313f1e0fe0d144b12969b 
docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351389939,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:69b5406c3dcf61c95a067571c873b8691dc7cb23b24dbe3749b0a1d2b7c08ca9 docker.io/ollivier/clearwater-homer:latest],SizeBytes:344133365,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:24e9186a8be32af9559f4d198c5c423eaac0d6c7b827c5ab674f2d124385c2fb docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327029020,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:25b1c4759aa4dd92b752451e64f9df5f4a6336d74a15dd5914fbb83ab81ab9f4 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303484988,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:5a833832419bcf25ea1044768038c885ed4bad73225d5d07fc54eebc2a56662b docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298458075,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:6eebbdbc9e424dd87b3d149b9fa1c779ad5c402e2f7ef414ec585a43ebb782d6 docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294998669,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:becf37bf5c8d9f81189d9d727c3c3ab7e032b7de3a710f7bbb264d35d442a344 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287275238,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:83359fb6320eefc0adbf56fcd4eb7a19be2c53dadaa4944a20510cc761536222 docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285335126,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:fee4656ab3b4db6aba14143a8a8e1aa77ac743e3574e7f9ca126a96887505ccc docker.io/aquasec/kube-hunter:latest],SizeBytes:127871601,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04],SizeBytes:46948523,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:7d2da02dca6f486c0f48830ae9d064712a7429523a749953e9fb516ec77637c4 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39175389,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:16222606,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:2bd792eae0d13222bbf7a3641328b2a8cbe80f39c04575d06754e63da6e46cc7 docker.io/aquasec/kube-bench:latest],SizeBytes:8042967,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a16f4454188d4c673716cb0a63d9cc39737e3192482d381d040b6fea1645c35d],SizeBytes:8042939,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a494f14462a61b48075ccd2c8f9e6b866a068e6caa2cdb002949ebf48f56ca2b],SizeBytes:8038933,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul 19 12:28:27.521: INFO: 
Logging kubelet events for node jerma-worker
Jul 19 12:28:27.524: INFO: 
Logging pods the kubelet thinks are on node jerma-worker
Jul 19 12:28:27.539: INFO: kube-proxy-2ssxj started at 2020-07-10 10:26:33 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.539: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 19 12:28:27.539: INFO: kindnet-bqk7h started at 2020-07-10 10:26:33 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.539: INFO: 	Container kindnet-cni ready: true, restart count 0
W0719 12:28:27.542813       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 19 12:28:27.582: INFO: 
Latency metrics for node jerma-worker
Jul 19 12:28:27.582: INFO: 
Logging node info for node jerma-worker2
Jul 19 12:28:27.584: INFO: Node Info: &Node{ObjectMeta:{jerma-worker2   /api/v1/nodes/jerma-worker2 ec9cf572-6275-4db1-ae6c-a6f614f21667 2424809 0 2020-07-10 10:26:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-19 12:26:28 +0000 UTC,LastTransitionTime:2020-07-10 10:26:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-19 12:26:28 +0000 UTC,LastTransitionTime:2020-07-10 10:26:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-19 12:26:28 +0000 UTC,LastTransitionTime:2020-07-10 10:26:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-19 12:26:28 +0000 UTC,LastTransitionTime:2020-07-10 10:27:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.10,},NodeAddress{Type:Hostname,Address:jerma-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:99c90372fb474781bd8fd067d0f8a694,SystemUUID:90b3660f-35fa-4a53-ba21-6c02b53d250b,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:24f49e5e936930e808cd79cac72fd4f2dc87e97b33a9dedecf60d0eb1f655015 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386316854,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:21ea1bbce8747d80fc46c07ee0bdb94653036ee544413853074f39900798a7d8 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360555271,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:ba47a8963e0683886890de11cf65942f3460ec4e2ad313f1e0fe0d144b12969b docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351389939,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:69b5406c3dcf61c95a067571c873b8691dc7cb23b24dbe3749b0a1d2b7c08ca9 docker.io/ollivier/clearwater-homer:latest],SizeBytes:344133365,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:24e9186a8be32af9559f4d198c5c423eaac0d6c7b827c5ab674f2d124385c2fb docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327029020,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:25b1c4759aa4dd92b752451e64f9df5f4a6336d74a15dd5914fbb83ab81ab9f4 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303484988,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:5a833832419bcf25ea1044768038c885ed4bad73225d5d07fc54eebc2a56662b docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298458075,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:6eebbdbc9e424dd87b3d149b9fa1c779ad5c402e2f7ef414ec585a43ebb782d6 docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294998669,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:becf37bf5c8d9f81189d9d727c3c3ab7e032b7de3a710f7bbb264d35d442a344 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287275238,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:83359fb6320eefc0adbf56fcd4eb7a19be2c53dadaa4944a20510cc761536222 docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285335126,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:fee4656ab3b4db6aba14143a8a8e1aa77ac743e3574e7f9ca126a96887505ccc 
docker.io/aquasec/kube-hunter:latest],SizeBytes:127871601,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04],SizeBytes:46948523,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:7d2da02dca6f486c0f48830ae9d064712a7429523a749953e9fb516ec77637c4 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39175389,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:2bd792eae0d13222bbf7a3641328b2a8cbe80f39c04575d06754e63da6e46cc7 docker.io/aquasec/kube-bench:latest],SizeBytes:8042967,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a16f4454188d4c673716cb0a63d9cc39737e3192482d381d040b6fea1645c35d],SizeBytes:8042939,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul 19 12:28:27.585: INFO: 
Logging kubelet events for node jerma-worker2
Jul 19 12:28:27.587: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2
Jul 19 12:28:27.592: INFO: kube-proxy-67jwf started at 2020-07-10 10:26:32 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.592: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 19 12:28:27.592: INFO: kindnet-klj8h started at 2020-07-10 10:26:32 +0000 UTC (0+1 container statuses recorded)
Jul 19 12:28:27.592: INFO: 	Container kindnet-cni ready: true, restart count 0
W0719 12:28:27.595919       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 19 12:28:27.645: INFO: 
Latency metrics for node jerma-worker2
Jul 19 12:28:27.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4500" for this suite.

• Failure [19.842 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353
    should be able to retrieve and filter logs  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Jul 19 12:28:12.695: Expected
        : 2
    to equal
        : 1

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1410
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":197,"skipped":3246,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:28:27.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-74492fc2-d188-450b-bd1a-c691b7b3dda5
STEP: Creating secret with name s-test-opt-upd-12ac8ac0-c28a-47b6-9b07-21b67034bbf8
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-74492fc2-d188-450b-bd1a-c691b7b3dda5
STEP: Updating secret s-test-opt-upd-12ac8ac0-c28a-47b6-9b07-21b67034bbf8
STEP: Creating secret with name s-test-opt-create-a92b156a-4bac-4c97-87ee-73ae190dc29f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:28:40.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5875" for this suite.

• [SLOW TEST:13.015 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3250,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:28:40.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 19 12:28:48.068: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:28:48.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4647" for this suite.

• [SLOW TEST:7.577 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3291,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:28:48.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-44fa19ca-41b7-49d4-8861-8fd426a0d9b2
STEP: Creating a pod to test consume configMaps
Jul 19 12:28:48.909: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d" in namespace "configmap-3883" to be "success or failure"
Jul 19 12:28:48.986: INFO: Pod "pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d": Phase="Pending", Reason="", readiness=false. Elapsed: 77.099402ms
Jul 19 12:28:51.001: INFO: Pod "pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092110724s
Jul 19 12:28:53.004: INFO: Pod "pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d": Phase="Running", Reason="", readiness=true. Elapsed: 4.09483522s
Jul 19 12:28:55.008: INFO: Pod "pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099171435s
STEP: Saw pod success
Jul 19 12:28:55.008: INFO: Pod "pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d" satisfied condition "success or failure"
Jul 19 12:28:55.011: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d container configmap-volume-test: 
STEP: delete the pod
Jul 19 12:28:55.086: INFO: Waiting for pod pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d to disappear
Jul 19 12:28:55.094: INFO: Pod pod-configmaps-d2f47d9f-ed88-43d5-8c86-79cf0b7fb24d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:28:55.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3883" for this suite.

• [SLOW TEST:6.854 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3291,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:28:55.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-1d0b22ce-bcae-4fb1-be06-c3f7b0c9f430
STEP: Creating a pod to test consume secrets
Jul 19 12:28:55.276: INFO: Waiting up to 5m0s for pod "pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6" in namespace "secrets-8459" to be "success or failure"
Jul 19 12:28:55.291: INFO: Pod "pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.380282ms
Jul 19 12:28:57.295: INFO: Pod "pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019390694s
Jul 19 12:28:59.463: INFO: Pod "pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6": Phase="Running", Reason="", readiness=true. Elapsed: 4.186662038s
Jul 19 12:29:01.467: INFO: Pod "pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190732061s
STEP: Saw pod success
Jul 19 12:29:01.467: INFO: Pod "pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6" satisfied condition "success or failure"
Jul 19 12:29:01.470: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6 container secret-volume-test: 
STEP: delete the pod
Jul 19 12:29:01.623: INFO: Waiting for pod pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6 to disappear
Jul 19 12:29:01.689: INFO: Pod pod-secrets-b5e42b63-cd84-4b32-ae6c-377423c6a7e6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:29:01.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8459" for this suite.
STEP: Destroying namespace "secret-namespace-467" for this suite.

• [SLOW TEST:6.741 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3301,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:29:01.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:29:01.926: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/
... (the proxy returned the same two-entry listing for each of the 20 requests; the remaining responses, this test's closing PASSED record, and the timestamps of the next test's setup were lost to truncation here)
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:29:02.065: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:29:03.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8073" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":203,"skipped":3331,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:29:03.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-dfbdfcbe-3667-41c7-aa18-9822de54a38e
STEP: Creating a pod to test consume configMaps
Jul 19 12:29:03.528: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757" in namespace "projected-4960" to be "success or failure"
Jul 19 12:29:03.586: INFO: Pod "pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757": Phase="Pending", Reason="", readiness=false. Elapsed: 57.721826ms
Jul 19 12:29:05.768: INFO: Pod "pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239943256s
Jul 19 12:29:07.772: INFO: Pod "pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.243753307s
STEP: Saw pod success
Jul 19 12:29:07.772: INFO: Pod "pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757" satisfied condition "success or failure"
Jul 19 12:29:07.776: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 19 12:29:08.017: INFO: Waiting for pod pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757 to disappear
Jul 19 12:29:08.097: INFO: Pod pod-projected-configmaps-889d47e7-fa30-42ee-b557-5eb700839757 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:29:08.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4960" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3338,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:29:08.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:29:09.188: INFO: Waiting up to 5m0s for pod "busybox-user-65534-27d0e70a-7897-4cd4-bad7-78d371689ca6" in namespace "security-context-test-4333" to be "success or failure"
Jul 19 12:29:09.191: INFO: Pod "busybox-user-65534-27d0e70a-7897-4cd4-bad7-78d371689ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.354508ms
Jul 19 12:29:11.195: INFO: Pod "busybox-user-65534-27d0e70a-7897-4cd4-bad7-78d371689ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007312375s
Jul 19 12:29:13.229: INFO: Pod "busybox-user-65534-27d0e70a-7897-4cd4-bad7-78d371689ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041177435s
Jul 19 12:29:15.595: INFO: Pod "busybox-user-65534-27d0e70a-7897-4cd4-bad7-78d371689ca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.40677046s
Jul 19 12:29:15.595: INFO: Pod "busybox-user-65534-27d0e70a-7897-4cd4-bad7-78d371689ca6" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:29:15.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4333" for this suite.

• [SLOW TEST:7.499 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3381,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:29:15.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul 19 12:29:16.221: INFO: >>> kubeConfig: /root/.kube/config
Jul 19 12:29:18.773: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:29:29.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8682" for this suite.

• [SLOW TEST:13.635 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":206,"skipped":3386,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:29:29.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-e331e486-9b71-4ac7-b75d-14a3a23f6749
STEP: Creating a pod to test consume configMaps
Jul 19 12:29:29.332: INFO: Waiting up to 5m0s for pod "pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2" in namespace "configmap-6312" to be "success or failure"
Jul 19 12:29:29.351: INFO: Pod "pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.160612ms
Jul 19 12:29:31.391: INFO: Pod "pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059125912s
Jul 19 12:29:33.395: INFO: Pod "pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2": Phase="Running", Reason="", readiness=true. Elapsed: 4.063150015s
Jul 19 12:29:35.421: INFO: Pod "pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089070191s
STEP: Saw pod success
Jul 19 12:29:35.421: INFO: Pod "pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2" satisfied condition "success or failure"
Jul 19 12:29:35.424: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2 container configmap-volume-test: 
STEP: delete the pod
Jul 19 12:29:35.460: INFO: Waiting for pod pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2 to disappear
Jul 19 12:29:35.478: INFO: Pod pod-configmaps-ceb5397d-c7c4-457c-85f8-e1ae4f5130d2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:29:35.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6312" for this suite.

• [SLOW TEST:6.246 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3393,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:29:35.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-15ae45df-9375-43bc-8743-6c81a7048a6b in namespace container-probe-443
Jul 19 12:29:39.742: INFO: Started pod test-webserver-15ae45df-9375-43bc-8743-6c81a7048a6b in namespace container-probe-443
STEP: checking the pod's current state and verifying that restartCount is present
Jul 19 12:29:39.745: INFO: Initial restart count of pod test-webserver-15ae45df-9375-43bc-8743-6c81a7048a6b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:33:41.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-443" for this suite.

• [SLOW TEST:246.718 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3406,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:33:42.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-4864
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4864
STEP: Deleting pre-stop pod
Jul 19 12:34:04.154: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:34:04.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4864" for this suite.

• [SLOW TEST:21.973 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":209,"skipped":3420,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:34:04.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6534
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 19 12:34:04.256: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 19 12:34:28.558: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.66:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6534 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 12:34:28.558: INFO: >>> kubeConfig: /root/.kube/config
I0719 12:34:28.597092       6 log.go:172] (0xc000b5c2c0) (0xc002303ae0) Create stream
I0719 12:34:28.597125       6 log.go:172] (0xc000b5c2c0) (0xc002303ae0) Stream added, broadcasting: 1
I0719 12:34:28.598806       6 log.go:172] (0xc000b5c2c0) Reply frame received for 1
I0719 12:34:28.598845       6 log.go:172] (0xc000b5c2c0) (0xc001f90be0) Create stream
I0719 12:34:28.598860       6 log.go:172] (0xc000b5c2c0) (0xc001f90be0) Stream added, broadcasting: 3
I0719 12:34:28.599713       6 log.go:172] (0xc000b5c2c0) Reply frame received for 3
I0719 12:34:28.599737       6 log.go:172] (0xc000b5c2c0) (0xc002303b80) Create stream
I0719 12:34:28.599743       6 log.go:172] (0xc000b5c2c0) (0xc002303b80) Stream added, broadcasting: 5
I0719 12:34:28.600631       6 log.go:172] (0xc000b5c2c0) Reply frame received for 5
I0719 12:34:28.671413       6 log.go:172] (0xc000b5c2c0) Data frame received for 3
I0719 12:34:28.671448       6 log.go:172] (0xc001f90be0) (3) Data frame handling
I0719 12:34:28.671463       6 log.go:172] (0xc001f90be0) (3) Data frame sent
I0719 12:34:28.671482       6 log.go:172] (0xc000b5c2c0) Data frame received for 3
I0719 12:34:28.671509       6 log.go:172] (0xc001f90be0) (3) Data frame handling
I0719 12:34:28.671555       6 log.go:172] (0xc000b5c2c0) Data frame received for 5
I0719 12:34:28.671569       6 log.go:172] (0xc002303b80) (5) Data frame handling
I0719 12:34:28.673530       6 log.go:172] (0xc000b5c2c0) Data frame received for 1
I0719 12:34:28.673600       6 log.go:172] (0xc002303ae0) (1) Data frame handling
I0719 12:34:28.673630       6 log.go:172] (0xc002303ae0) (1) Data frame sent
I0719 12:34:28.673651       6 log.go:172] (0xc000b5c2c0) (0xc002303ae0) Stream removed, broadcasting: 1
I0719 12:34:28.673677       6 log.go:172] (0xc000b5c2c0) Go away received
I0719 12:34:28.673785       6 log.go:172] (0xc000b5c2c0) (0xc002303ae0) Stream removed, broadcasting: 1
I0719 12:34:28.673810       6 log.go:172] (0xc000b5c2c0) (0xc001f90be0) Stream removed, broadcasting: 3
I0719 12:34:28.673821       6 log.go:172] (0xc000b5c2c0) (0xc002303b80) Stream removed, broadcasting: 5
Jul 19 12:34:28.673: INFO: Found all expected endpoints: [netserver-0]
Jul 19 12:34:28.677: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6534 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 12:34:28.677: INFO: >>> kubeConfig: /root/.kube/config
I0719 12:34:28.706271       6 log.go:172] (0xc002d94210) (0xc001382640) Create stream
I0719 12:34:28.706308       6 log.go:172] (0xc002d94210) (0xc001382640) Stream added, broadcasting: 1
I0719 12:34:28.708349       6 log.go:172] (0xc002d94210) Reply frame received for 1
I0719 12:34:28.708387       6 log.go:172] (0xc002d94210) (0xc001e72000) Create stream
I0719 12:34:28.708401       6 log.go:172] (0xc002d94210) (0xc001e72000) Stream added, broadcasting: 3
I0719 12:34:28.709552       6 log.go:172] (0xc002d94210) Reply frame received for 3
I0719 12:34:28.709584       6 log.go:172] (0xc002d94210) (0xc001f90dc0) Create stream
I0719 12:34:28.709596       6 log.go:172] (0xc002d94210) (0xc001f90dc0) Stream added, broadcasting: 5
I0719 12:34:28.710462       6 log.go:172] (0xc002d94210) Reply frame received for 5
I0719 12:34:28.763217       6 log.go:172] (0xc002d94210) Data frame received for 5
I0719 12:34:28.763449       6 log.go:172] (0xc001f90dc0) (5) Data frame handling
I0719 12:34:28.763537       6 log.go:172] (0xc002d94210) Data frame received for 3
I0719 12:34:28.763562       6 log.go:172] (0xc001e72000) (3) Data frame handling
I0719 12:34:28.763574       6 log.go:172] (0xc001e72000) (3) Data frame sent
I0719 12:34:28.763580       6 log.go:172] (0xc002d94210) Data frame received for 3
I0719 12:34:28.763584       6 log.go:172] (0xc001e72000) (3) Data frame handling
I0719 12:34:28.765482       6 log.go:172] (0xc002d94210) Data frame received for 1
I0719 12:34:28.765498       6 log.go:172] (0xc001382640) (1) Data frame handling
I0719 12:34:28.765514       6 log.go:172] (0xc001382640) (1) Data frame sent
I0719 12:34:28.765529       6 log.go:172] (0xc002d94210) (0xc001382640) Stream removed, broadcasting: 1
I0719 12:34:28.765581       6 log.go:172] (0xc002d94210) Go away received
I0719 12:34:28.765621       6 log.go:172] (0xc002d94210) (0xc001382640) Stream removed, broadcasting: 1
I0719 12:34:28.765633       6 log.go:172] (0xc002d94210) (0xc001e72000) Stream removed, broadcasting: 3
I0719 12:34:28.765642       6 log.go:172] (0xc002d94210) (0xc001f90dc0) Stream removed, broadcasting: 5
Jul 19 12:34:28.765: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:34:28.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6534" for this suite.

• [SLOW TEST:24.595 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3431,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:34:28.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:34:29.126: INFO: Creating ReplicaSet my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592
Jul 19 12:34:29.182: INFO: Pod name my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592: Found 0 pods out of 1
Jul 19 12:34:34.203: INFO: Pod name my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592: Found 1 pods out of 1
Jul 19 12:34:34.203: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592" is running
Jul 19 12:34:34.294: INFO: Pod "my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592-7s2zh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 12:34:29 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 12:34:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 12:34:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 12:34:29 +0000 UTC Reason: Message:}])
Jul 19 12:34:34.294: INFO: Trying to dial the pod
Jul 19 12:34:39.330: INFO: Controller my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592: Got expected result from replica 1 [my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592-7s2zh]: "my-hostname-basic-0ceecf52-108e-4405-8104-e033195bd592-7s2zh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:34:39.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1204" for this suite.

• [SLOW TEST:10.568 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":211,"skipped":3462,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:34:39.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 19 12:34:39.576: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7827 /api/v1/namespaces/watch-7827/configmaps/e2e-watch-test-label-changed 00d76224-fb90-4fea-8f7a-f4c8b73377ed 2426717 0 2020-07-19 12:34:39 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 19 12:34:39.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7827 /api/v1/namespaces/watch-7827/configmaps/e2e-watch-test-label-changed 00d76224-fb90-4fea-8f7a-f4c8b73377ed 2426718 0 2020-07-19 12:34:39 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 19 12:34:39.577: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7827 /api/v1/namespaces/watch-7827/configmaps/e2e-watch-test-label-changed 00d76224-fb90-4fea-8f7a-f4c8b73377ed 2426719 0 2020-07-19 12:34:39 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 19 12:34:49.649: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7827 /api/v1/namespaces/watch-7827/configmaps/e2e-watch-test-label-changed 00d76224-fb90-4fea-8f7a-f4c8b73377ed 2426763 0 2020-07-19 12:34:39 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 19 12:34:49.649: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7827 /api/v1/namespaces/watch-7827/configmaps/e2e-watch-test-label-changed 00d76224-fb90-4fea-8f7a-f4c8b73377ed 2426764 0 2020-07-19 12:34:39 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul 19 12:34:49.649: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7827 /api/v1/namespaces/watch-7827/configmaps/e2e-watch-test-label-changed 00d76224-fb90-4fea-8f7a-f4c8b73377ed 2426765 0 2020-07-19 12:34:39 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:34:49.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7827" for this suite.

• [SLOW TEST:10.321 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":212,"skipped":3465,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:34:49.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:34:53.907: INFO: Waiting up to 5m0s for pod "client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839" in namespace "pods-842" to be "success or failure"
Jul 19 12:34:53.915: INFO: Pod "client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839": Phase="Pending", Reason="", readiness=false. Elapsed: 7.576678ms
Jul 19 12:34:55.919: INFO: Pod "client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011805439s
Jul 19 12:34:58.068: INFO: Pod "client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160287029s
Jul 19 12:35:00.157: INFO: Pod "client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249423079s
STEP: Saw pod success
Jul 19 12:35:00.157: INFO: Pod "client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839" satisfied condition "success or failure"
Jul 19 12:35:00.160: INFO: Trying to get logs from node jerma-worker pod client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839 container env3cont: 
STEP: delete the pod
Jul 19 12:35:00.308: INFO: Waiting for pod client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839 to disappear
Jul 19 12:35:00.353: INFO: Pod client-envvars-33de58a2-2bee-41e5-becf-6eb71e0aa839 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:35:00.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-842" for this suite.

• [SLOW TEST:10.694 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3512,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:35:00.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:35:19.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9343" for this suite.

• [SLOW TEST:18.678 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":214,"skipped":3522,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:35:19.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:35:19.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7639" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":215,"skipped":3529,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:35:19.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5473
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5473
STEP: Creating statefulset with conflicting port in namespace statefulset-5473
STEP: Waiting until pod test-pod starts running in namespace statefulset-5473
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5473
Jul 19 12:35:28.144: INFO: Observed stateful pod in namespace: statefulset-5473, name: ss-0, uid: e5eed33e-ec43-4889-bb66-f57cde961446, status phase: Pending. Waiting for statefulset controller to delete.
Jul 19 12:35:28.889: INFO: Observed stateful pod in namespace: statefulset-5473, name: ss-0, uid: e5eed33e-ec43-4889-bb66-f57cde961446, status phase: Failed. Waiting for statefulset controller to delete.
Jul 19 12:35:28.930: INFO: Observed stateful pod in namespace: statefulset-5473, name: ss-0, uid: e5eed33e-ec43-4889-bb66-f57cde961446, status phase: Failed. Waiting for statefulset controller to delete.
Jul 19 12:35:28.986: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5473
STEP: Removing pod with conflicting port in namespace statefulset-5473
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5473 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 19 12:35:35.560: INFO: Deleting all statefulset in ns statefulset-5473
Jul 19 12:35:35.575: INFO: Scaling statefulset ss to 0
Jul 19 12:35:45.918: INFO: Waiting for statefulset status.replicas updated to 0
Jul 19 12:35:45.920: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:35:45.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5473" for this suite.

• [SLOW TEST:26.442 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":216,"skipped":3546,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:35:45.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-5db2e2ce-7367-4fa4-8ab2-20aa98a47bb9
STEP: Creating a pod to test consume secrets
Jul 19 12:35:46.091: INFO: Waiting up to 5m0s for pod "pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431" in namespace "secrets-7682" to be "success or failure"
Jul 19 12:35:46.136: INFO: Pod "pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431": Phase="Pending", Reason="", readiness=false. Elapsed: 44.805194ms
Jul 19 12:35:48.171: INFO: Pod "pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079373226s
Jul 19 12:35:50.175: INFO: Pod "pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083441692s
Jul 19 12:35:52.403: INFO: Pod "pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.311502891s
STEP: Saw pod success
Jul 19 12:35:52.403: INFO: Pod "pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431" satisfied condition "success or failure"
Jul 19 12:35:52.415: INFO: Trying to get logs from node jerma-worker pod pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431 container secret-volume-test: 
STEP: delete the pod
Jul 19 12:35:52.440: INFO: Waiting for pod pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431 to disappear
Jul 19 12:35:52.446: INFO: Pod pod-secrets-657ef61e-6917-4720-89e8-e7f43e741431 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:35:52.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7682" for this suite.

• [SLOW TEST:6.489 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3559,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:35:52.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-c6c9a19f-e132-423d-8757-b4ffa443b1ed
STEP: Creating a pod to test consume secrets
Jul 19 12:35:53.302: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b" in namespace "projected-5170" to be "success or failure"
Jul 19 12:35:53.320: INFO: Pod "pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115027ms
Jul 19 12:35:55.344: INFO: Pod "pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042419192s
Jul 19 12:35:57.348: INFO: Pod "pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.046434432s
Jul 19 12:35:59.353: INFO: Pod "pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050541046s
STEP: Saw pod success
Jul 19 12:35:59.353: INFO: Pod "pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b" satisfied condition "success or failure"
Jul 19 12:35:59.355: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b container projected-secret-volume-test: 
STEP: delete the pod
Jul 19 12:35:59.496: INFO: Waiting for pod pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b to disappear
Jul 19 12:35:59.506: INFO: Pod pod-projected-secrets-ba835e54-13f8-4510-86e5-626e52e24a4b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:35:59.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5170" for this suite.

• [SLOW TEST:7.098 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3588,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:35:59.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5058.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5058.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5058.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5058.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5058.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5058.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 19 12:36:10.046: INFO: DNS probes using dns-5058/dns-test-c0597c4a-298b-47d6-b66c-ea0ed0af9117 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:36:10.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5058" for this suite.

• [SLOW TEST:10.720 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":219,"skipped":3623,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:36:10.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jul 19 12:36:10.682: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix988485671/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:36:10.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7596" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":220,"skipped":3648,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:36:10.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 19 12:36:25.445: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:25.453: INFO: Pod pod-with-prestop-http-hook still exists
Jul 19 12:36:27.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:27.457: INFO: Pod pod-with-prestop-http-hook still exists
Jul 19 12:36:29.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:29.456: INFO: Pod pod-with-prestop-http-hook still exists
Jul 19 12:36:31.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:31.457: INFO: Pod pod-with-prestop-http-hook still exists
Jul 19 12:36:33.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:33.457: INFO: Pod pod-with-prestop-http-hook still exists
Jul 19 12:36:35.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:35.457: INFO: Pod pod-with-prestop-http-hook still exists
Jul 19 12:36:37.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:37.535: INFO: Pod pod-with-prestop-http-hook still exists
Jul 19 12:36:39.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 19 12:36:39.457: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:36:39.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8839" for this suite.

• [SLOW TEST:28.643 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3664,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:36:39.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul 19 12:36:39.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5766'
Jul 19 12:36:39.860: INFO: stderr: ""
Jul 19 12:36:39.861: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 19 12:36:40.882: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:36:40.882: INFO: Found 0 / 1
Jul 19 12:36:41.865: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:36:41.865: INFO: Found 0 / 1
Jul 19 12:36:42.865: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:36:42.865: INFO: Found 0 / 1
Jul 19 12:36:43.865: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:36:43.865: INFO: Found 1 / 1
Jul 19 12:36:43.865: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jul 19 12:36:43.868: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:36:43.868: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jul 19 12:36:43.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-5s7ql --namespace=kubectl-5766 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 19 12:36:43.960: INFO: stderr: ""
Jul 19 12:36:43.960: INFO: stdout: "pod/agnhost-master-5s7ql patched\n"
STEP: checking annotations
Jul 19 12:36:44.138: INFO: Selector matched 1 pod for map[app:agnhost]
Jul 19 12:36:44.138: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:36:44.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5766" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":222,"skipped":3667,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:36:44.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul 19 12:36:51.370: INFO: Successfully updated pod "labelsupdate1f2019f0-77c1-463a-b267-0cbb8e907463"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:36:53.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-759" for this suite.

• [SLOW TEST:9.614 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3670,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:36:53.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jul 19 12:36:54.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8625'
Jul 19 12:36:54.557: INFO: stderr: ""
Jul 19 12:36:54.557: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 19 12:36:54.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8625'
Jul 19 12:36:54.665: INFO: stderr: ""
Jul 19 12:36:54.665: INFO: stdout: "update-demo-nautilus-qsstm update-demo-nautilus-zfr58 "
Jul 19 12:36:54.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qsstm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:36:54.781: INFO: stderr: ""
Jul 19 12:36:54.781: INFO: stdout: ""
Jul 19 12:36:54.781: INFO: update-demo-nautilus-qsstm is created but not running
Jul 19 12:36:59.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8625'
Jul 19 12:36:59.993: INFO: stderr: ""
Jul 19 12:36:59.993: INFO: stdout: "update-demo-nautilus-qsstm update-demo-nautilus-zfr58 "
Jul 19 12:36:59.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qsstm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:00.227: INFO: stderr: ""
Jul 19 12:37:00.227: INFO: stdout: ""
Jul 19 12:37:00.227: INFO: update-demo-nautilus-qsstm is created but not running
Jul 19 12:37:05.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8625'
Jul 19 12:37:05.329: INFO: stderr: ""
Jul 19 12:37:05.329: INFO: stdout: "update-demo-nautilus-qsstm update-demo-nautilus-zfr58 "
Jul 19 12:37:05.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qsstm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:05.418: INFO: stderr: ""
Jul 19 12:37:05.418: INFO: stdout: "true"
Jul 19 12:37:05.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qsstm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:05.540: INFO: stderr: ""
Jul 19 12:37:05.541: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 19 12:37:05.541: INFO: validating pod update-demo-nautilus-qsstm
Jul 19 12:37:05.544: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 19 12:37:05.544: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 19 12:37:05.544: INFO: update-demo-nautilus-qsstm is verified up and running
Jul 19 12:37:05.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfr58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:05.632: INFO: stderr: ""
Jul 19 12:37:05.632: INFO: stdout: "true"
Jul 19 12:37:05.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfr58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:05.855: INFO: stderr: ""
Jul 19 12:37:05.855: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 19 12:37:05.855: INFO: validating pod update-demo-nautilus-zfr58
Jul 19 12:37:05.858: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 19 12:37:05.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 19 12:37:05.858: INFO: update-demo-nautilus-zfr58 is verified up and running
STEP: rolling-update to new replication controller
Jul 19 12:37:05.860: INFO: scanned /root for discovery docs: 
Jul 19 12:37:05.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8625'
Jul 19 12:37:31.356: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 19 12:37:31.356: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 19 12:37:31.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8625'
Jul 19 12:37:31.470: INFO: stderr: ""
Jul 19 12:37:31.470: INFO: stdout: "update-demo-kitten-7vpnn update-demo-kitten-xdqlv "
Jul 19 12:37:31.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7vpnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:31.570: INFO: stderr: ""
Jul 19 12:37:31.570: INFO: stdout: "true"
Jul 19 12:37:31.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7vpnn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:31.660: INFO: stderr: ""
Jul 19 12:37:31.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 19 12:37:31.660: INFO: validating pod update-demo-kitten-7vpnn
Jul 19 12:37:31.663: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 19 12:37:31.663: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jul 19 12:37:31.663: INFO: update-demo-kitten-7vpnn is verified up and running
Jul 19 12:37:31.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xdqlv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:31.748: INFO: stderr: ""
Jul 19 12:37:31.748: INFO: stdout: "true"
Jul 19 12:37:31.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xdqlv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8625'
Jul 19 12:37:31.831: INFO: stderr: ""
Jul 19 12:37:31.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 19 12:37:31.831: INFO: validating pod update-demo-kitten-xdqlv
Jul 19 12:37:31.834: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 19 12:37:31.834: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jul 19 12:37:31.834: INFO: update-demo-kitten-xdqlv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:37:31.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8625" for this suite.

• [SLOW TEST:38.081 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":224,"skipped":3703,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:37:31.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ead5b8e5-015a-4807-9605-8726459aa410
STEP: Creating a pod to test consume configMaps
Jul 19 12:37:32.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0" in namespace "projected-9271" to be "success or failure"
Jul 19 12:37:32.254: INFO: Pod "pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18364ms
Jul 19 12:37:34.399: INFO: Pod "pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151212394s
Jul 19 12:37:36.403: INFO: Pod "pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155052942s
Jul 19 12:37:38.823: INFO: Pod "pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.575793009s
STEP: Saw pod success
Jul 19 12:37:38.823: INFO: Pod "pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0" satisfied condition "success or failure"
Jul 19 12:37:38.826: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 19 12:37:39.210: INFO: Waiting for pod pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0 to disappear
Jul 19 12:37:39.304: INFO: Pod pod-projected-configmaps-300d16dd-20a3-4179-97fd-b1b185cfbfd0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:37:39.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9271" for this suite.

• [SLOW TEST:7.736 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3759,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:37:39.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-3108f8d5-4c1f-40fc-9a72-fb7163bfadd1
STEP: Creating a pod to test consume configMaps
Jul 19 12:37:40.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316" in namespace "configmap-5990" to be "success or failure"
Jul 19 12:37:40.734: INFO: Pod "pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316": Phase="Pending", Reason="", readiness=false. Elapsed: 83.813646ms
Jul 19 12:37:42.811: INFO: Pod "pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16148233s
Jul 19 12:37:44.911: INFO: Pod "pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261288654s
Jul 19 12:37:46.997: INFO: Pod "pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316": Phase="Running", Reason="", readiness=true. Elapsed: 6.347228394s
Jul 19 12:37:49.001: INFO: Pod "pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.350612896s
STEP: Saw pod success
Jul 19 12:37:49.001: INFO: Pod "pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316" satisfied condition "success or failure"
Jul 19 12:37:49.003: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316 container configmap-volume-test: 
STEP: delete the pod
Jul 19 12:37:49.143: INFO: Waiting for pod pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316 to disappear
Jul 19 12:37:49.192: INFO: Pod pod-configmaps-b04e6e23-e916-400d-8d5d-8f482120e316 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:37:49.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5990" for this suite.

• [SLOW TEST:9.624 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3774,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:37:49.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-348/configmap-test-d69537ef-1392-4234-ab94-f18209afe69a
STEP: Creating a pod to test consume configMaps
Jul 19 12:37:49.550: INFO: Waiting up to 5m0s for pod "pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee" in namespace "configmap-348" to be "success or failure"
Jul 19 12:37:49.569: INFO: Pod "pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee": Phase="Pending", Reason="", readiness=false. Elapsed: 19.041882ms
Jul 19 12:37:51.710: INFO: Pod "pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160436792s
Jul 19 12:37:53.715: INFO: Pod "pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164795537s
Jul 19 12:37:55.718: INFO: Pod "pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168413314s
STEP: Saw pod success
Jul 19 12:37:55.718: INFO: Pod "pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee" satisfied condition "success or failure"
Jul 19 12:37:55.721: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee container env-test: 
STEP: delete the pod
Jul 19 12:37:55.821: INFO: Waiting for pod pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee to disappear
Jul 19 12:37:55.826: INFO: Pod pod-configmaps-67216ef6-8040-4a6b-9af7-9db95ac583ee no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:37:55.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-348" for this suite.

• [SLOW TEST:6.632 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3775,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:37:55.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul 19 12:37:55.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6140'
Jul 19 12:38:03.956: INFO: stderr: ""
Jul 19 12:38:03.956: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 19 12:38:03.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6140'
Jul 19 12:38:04.883: INFO: stderr: ""
Jul 19 12:38:04.883: INFO: stdout: "update-demo-nautilus-8kzwn update-demo-nautilus-jbkwj "
Jul 19 12:38:04.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kzwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6140'
Jul 19 12:38:04.969: INFO: stderr: ""
Jul 19 12:38:04.969: INFO: stdout: ""
Jul 19 12:38:04.969: INFO: update-demo-nautilus-8kzwn is created but not running
Jul 19 12:38:09.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6140'
Jul 19 12:38:10.219: INFO: stderr: ""
Jul 19 12:38:10.219: INFO: stdout: "update-demo-nautilus-8kzwn update-demo-nautilus-jbkwj "
Jul 19 12:38:10.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kzwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6140'
Jul 19 12:38:10.469: INFO: stderr: ""
Jul 19 12:38:10.469: INFO: stdout: "true"
Jul 19 12:38:10.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kzwn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6140'
Jul 19 12:38:10.765: INFO: stderr: ""
Jul 19 12:38:10.766: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 19 12:38:10.766: INFO: validating pod update-demo-nautilus-8kzwn
Jul 19 12:38:10.807: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 19 12:38:10.807: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 19 12:38:10.807: INFO: update-demo-nautilus-8kzwn is verified up and running
Jul 19 12:38:10.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jbkwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6140'
Jul 19 12:38:11.058: INFO: stderr: ""
Jul 19 12:38:11.058: INFO: stdout: "true"
Jul 19 12:38:11.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jbkwj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6140'
Jul 19 12:38:11.148: INFO: stderr: ""
Jul 19 12:38:11.148: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 19 12:38:11.148: INFO: validating pod update-demo-nautilus-jbkwj
Jul 19 12:38:11.152: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 19 12:38:11.152: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 19 12:38:11.152: INFO: update-demo-nautilus-jbkwj is verified up and running
STEP: using delete to clean up resources
Jul 19 12:38:11.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6140'
Jul 19 12:38:11.487: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 19 12:38:11.487: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 19 12:38:11.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6140'
Jul 19 12:38:11.749: INFO: stderr: "No resources found in kubectl-6140 namespace.\n"
Jul 19 12:38:11.749: INFO: stdout: ""
Jul 19 12:38:11.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6140 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 19 12:38:11.879: INFO: stderr: ""
Jul 19 12:38:11.879: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:38:11.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6140" for this suite.

• [SLOW TEST:16.056 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":228,"skipped":3775,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:38:11.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:38:23.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7889" for this suite.

• [SLOW TEST:11.988 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":229,"skipped":3801,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:38:23.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jul 19 12:38:23.918: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:38:23.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3878" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":230,"skipped":3814,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:38:24.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 19 12:38:30.697: INFO: Successfully updated pod "pod-update-1f1901c4-afb0-4293-bd8e-26901eedffea"
STEP: verifying the updated pod is in kubernetes
Jul 19 12:38:30.911: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:38:30.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2701" for this suite.

• [SLOW TEST:6.912 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3814,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:38:30.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul 19 12:38:37.954: INFO: Successfully updated pod "labelsupdate15d10f85-785e-4bb3-b1a0-0baaa27220fc"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:38:40.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5299" for this suite.

• [SLOW TEST:9.170 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3871,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:38:40.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jul 19 12:38:40.291: INFO: Waiting up to 5m0s for pod "client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711" in namespace "containers-5597" to be "success or failure"
Jul 19 12:38:40.324: INFO: Pod "client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711": Phase="Pending", Reason="", readiness=false. Elapsed: 32.575538ms
Jul 19 12:38:42.337: INFO: Pod "client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045560191s
Jul 19 12:38:44.341: INFO: Pod "client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049607255s
Jul 19 12:38:46.391: INFO: Pod "client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099655731s
STEP: Saw pod success
Jul 19 12:38:46.391: INFO: Pod "client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711" satisfied condition "success or failure"
Jul 19 12:38:46.394: INFO: Trying to get logs from node jerma-worker pod client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711 container test-container: 
STEP: delete the pod
Jul 19 12:38:46.416: INFO: Waiting for pod client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711 to disappear
Jul 19 12:38:46.426: INFO: Pod client-containers-3b7acbcb-b8f6-46dd-951b-3d0a784ad711 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:38:46.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5597" for this suite.

• [SLOW TEST:6.392 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3888,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:38:46.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 19 12:38:46.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4535'
Jul 19 12:38:46.874: INFO: stderr: ""
Jul 19 12:38:46.874: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul 19 12:38:51.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4535 -o json'
Jul 19 12:38:52.009: INFO: stderr: ""
Jul 19 12:38:52.009: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-19T12:38:46Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-4535\",\n        \"resourceVersion\": \"2428330\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4535/pods/e2e-test-httpd-pod\",\n        \"uid\": \"4884fec5-e1ed-4073-be8d-93afdc32f409\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-8m56s\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-8m56s\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-8m56s\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-19T12:38:47Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-19T12:38:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-19T12:38:50Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-19T12:38:46Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://63d6f55f662265124e1282aeb5f9afa3942d35fd182759541271d747ae0a0f3f\",\n                
\"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-19T12:38:49Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.10\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.31\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.31\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-19T12:38:47Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul 19 12:38:52.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4535'
Jul 19 12:38:52.262: INFO: stderr: ""
Jul 19 12:38:52.262: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Jul 19 12:38:52.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4535'
Jul 19 12:39:07.365: INFO: stderr: ""
Jul 19 12:39:07.365: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:39:07.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4535" for this suite.

• [SLOW TEST:20.893 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":234,"skipped":3888,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:39:07.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:39:07.468: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1e4cff3e-ea5c-4706-b45b-39168960abf0" in namespace "security-context-test-7964" to be "success or failure"
Jul 19 12:39:07.471: INFO: Pod "alpine-nnp-false-1e4cff3e-ea5c-4706-b45b-39168960abf0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.534366ms
Jul 19 12:39:09.476: INFO: Pod "alpine-nnp-false-1e4cff3e-ea5c-4706-b45b-39168960abf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007679276s
Jul 19 12:39:11.480: INFO: Pod "alpine-nnp-false-1e4cff3e-ea5c-4706-b45b-39168960abf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011670601s
Jul 19 12:39:11.480: INFO: Pod "alpine-nnp-false-1e4cff3e-ea5c-4706-b45b-39168960abf0" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:39:11.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7964" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3894,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:39:11.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jul 19 12:39:12.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9976 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul 19 12:39:17.858: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0719 12:39:17.544954    3367 log.go:172] (0xc0003c00b0) (0xc000764280) Create stream\nI0719 12:39:17.545008    3367 log.go:172] (0xc0003c00b0) (0xc000764280) Stream added, broadcasting: 1\nI0719 12:39:17.546894    3367 log.go:172] (0xc0003c00b0) Reply frame received for 1\nI0719 12:39:17.546920    3367 log.go:172] (0xc0003c00b0) (0xc000764320) Create stream\nI0719 12:39:17.546935    3367 log.go:172] (0xc0003c00b0) (0xc000764320) Stream added, broadcasting: 3\nI0719 12:39:17.547627    3367 log.go:172] (0xc0003c00b0) Reply frame received for 3\nI0719 12:39:17.547671    3367 log.go:172] (0xc0003c00b0) (0xc000724000) Create stream\nI0719 12:39:17.547682    3367 log.go:172] (0xc0003c00b0) (0xc000724000) Stream added, broadcasting: 5\nI0719 12:39:17.548258    3367 log.go:172] (0xc0003c00b0) Reply frame received for 5\nI0719 12:39:17.548289    3367 log.go:172] (0xc0003c00b0) (0xc000726000) Create stream\nI0719 12:39:17.548300    3367 log.go:172] (0xc0003c00b0) (0xc000726000) Stream added, broadcasting: 7\nI0719 12:39:17.548943    3367 log.go:172] (0xc0003c00b0) Reply frame received for 7\nI0719 12:39:17.549084    3367 log.go:172] (0xc000764320) (3) Writing data frame\nI0719 12:39:17.549168    3367 log.go:172] (0xc000764320) (3) Writing data frame\nI0719 12:39:17.549821    3367 log.go:172] (0xc0003c00b0) Data frame received for 5\nI0719 12:39:17.549841    3367 log.go:172] (0xc000724000) (5) Data frame handling\nI0719 12:39:17.549851    3367 log.go:172] (0xc000724000) (5) Data frame sent\nI0719 12:39:17.550308    3367 log.go:172] (0xc0003c00b0) Data frame received for 5\nI0719 12:39:17.550321    3367 log.go:172] (0xc000724000) (5) Data frame handling\nI0719 12:39:17.550336    3367 log.go:172] (0xc000724000) (5) Data frame sent\nI0719 12:39:17.582674    3367 log.go:172] (0xc0003c00b0) Data frame received for 7\nI0719 12:39:17.582717    3367 log.go:172] (0xc000726000) (7) Data frame handling\nI0719 12:39:17.582803    3367 log.go:172] (0xc0003c00b0) Data frame received for 5\nI0719 12:39:17.582873    3367 log.go:172] (0xc000724000) (5) Data frame handling\nI0719 12:39:17.583150    3367 log.go:172] (0xc0003c00b0) (0xc000764320) Stream removed, broadcasting: 3\nI0719 12:39:17.583185    3367 log.go:172] (0xc0003c00b0) Data frame received for 1\nI0719 12:39:17.583195    3367 log.go:172] (0xc000764280) (1) Data frame handling\nI0719 12:39:17.583206    3367 log.go:172] (0xc000764280) (1) Data frame sent\nI0719 12:39:17.583216    3367 log.go:172] (0xc0003c00b0) (0xc000764280) Stream removed, broadcasting: 1\nI0719 12:39:17.583403    3367 log.go:172] (0xc0003c00b0) Go away received\nI0719 12:39:17.583639    3367 log.go:172] (0xc0003c00b0) (0xc000764280) Stream removed, broadcasting: 1\nI0719 12:39:17.583659    3367 log.go:172] (0xc0003c00b0) (0xc000764320) Stream removed, broadcasting: 3\nI0719 12:39:17.583670    3367 log.go:172] (0xc0003c00b0) (0xc000724000) Stream removed, broadcasting: 5\nI0719 12:39:17.583685    3367 log.go:172] (0xc0003c00b0) (0xc000726000) Stream removed, broadcasting: 7\n"
Jul 19 12:39:17.858: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:39:19.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9976" for this suite.

• [SLOW TEST:8.378 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":236,"skipped":3907,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:39:19.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 19 12:39:19.969: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6" in namespace "downward-api-7681" to be "success or failure"
Jul 19 12:39:19.990: INFO: Pod "downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.798697ms
Jul 19 12:39:22.142: INFO: Pod "downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173544477s
Jul 19 12:39:24.196: INFO: Pod "downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6": Phase="Running", Reason="", readiness=true. Elapsed: 4.227709087s
Jul 19 12:39:26.200: INFO: Pod "downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.23119041s
STEP: Saw pod success
Jul 19 12:39:26.200: INFO: Pod "downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6" satisfied condition "success or failure"
Jul 19 12:39:26.202: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6 container client-container: 
STEP: delete the pod
Jul 19 12:39:26.219: INFO: Waiting for pod downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6 to disappear
Jul 19 12:39:26.236: INFO: Pod downwardapi-volume-3a8264a0-d1ca-4c62-a6cc-1775296831a6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:39:26.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7681" for this suite.

• [SLOW TEST:6.369 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3914,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:39:26.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:39:26.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 19 12:39:28.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7835 create -f -'
Jul 19 12:39:40.113: INFO: stderr: ""
Jul 19 12:39:40.113: INFO: stdout: "e2e-test-crd-publish-openapi-223-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 19 12:39:40.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7835 delete e2e-test-crd-publish-openapi-223-crds test-cr'
Jul 19 12:39:40.212: INFO: stderr: ""
Jul 19 12:39:40.212: INFO: stdout: "e2e-test-crd-publish-openapi-223-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jul 19 12:39:40.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7835 apply -f -'
Jul 19 12:39:40.485: INFO: stderr: ""
Jul 19 12:39:40.485: INFO: stdout: "e2e-test-crd-publish-openapi-223-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 19 12:39:40.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7835 delete e2e-test-crd-publish-openapi-223-crds test-cr'
Jul 19 12:39:40.578: INFO: stderr: ""
Jul 19 12:39:40.578: INFO: stdout: "e2e-test-crd-publish-openapi-223-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jul 19 12:39:40.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-223-crds'
Jul 19 12:39:40.871: INFO: stderr: ""
Jul 19 12:39:40.871: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-223-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:39:43.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7835" for this suite.

• [SLOW TEST:17.843 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":238,"skipped":3914,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:39:44.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:39:45.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 12:39:47.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:39:49.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759185, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:39:52.082: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:39:52.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:39:53.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1277" for this suite.
STEP: Destroying namespace "webhook-1277-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.695 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":239,"skipped":3915,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:39:53.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:40:31.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7062" for this suite.

• [SLOW TEST:37.418 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3918,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:40:31.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 19 12:40:31.612: INFO: Waiting up to 5m0s for pod "pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e" in namespace "emptydir-9250" to be "success or failure"
Jul 19 12:40:31.647: INFO: Pod "pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.378899ms
Jul 19 12:40:33.651: INFO: Pod "pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039174307s
Jul 19 12:40:35.655: INFO: Pod "pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042924564s
Jul 19 12:40:37.664: INFO: Pod "pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052342651s
STEP: Saw pod success
Jul 19 12:40:37.664: INFO: Pod "pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e" satisfied condition "success or failure"
Jul 19 12:40:37.668: INFO: Trying to get logs from node jerma-worker pod pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e container test-container: 
STEP: delete the pod
Jul 19 12:40:37.695: INFO: Waiting for pod pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e to disappear
Jul 19 12:40:37.699: INFO: Pod pod-d6df1ce1-2279-4a8a-a5e2-0bffc7aca84e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:40:37.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9250" for this suite.

• [SLOW TEST:6.506 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3938,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:40:37.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:40:38.272: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 12:40:40.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759238, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759238, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759238, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759238, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:40:43.310: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:40:55.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4974" for this suite.
STEP: Destroying namespace "webhook-4974-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.033 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":242,"skipped":3949,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:40:55.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 19 12:40:56.000: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a" in namespace "projected-7119" to be "success or failure"
Jul 19 12:40:56.006: INFO: Pod "downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.657763ms
Jul 19 12:40:58.210: INFO: Pod "downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209934222s
Jul 19 12:41:00.214: INFO: Pod "downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214230893s
STEP: Saw pod success
Jul 19 12:41:00.214: INFO: Pod "downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a" satisfied condition "success or failure"
Jul 19 12:41:00.217: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a container client-container: 
STEP: delete the pod
Jul 19 12:41:00.582: INFO: Waiting for pod downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a to disappear
Jul 19 12:41:00.706: INFO: Pod downwardapi-volume-61544ba2-48a5-4329-8bf8-5bcf47b82b4a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:00.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7119" for this suite.

• [SLOW TEST:5.004 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3954,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:00.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul 19 12:41:00.888: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:09.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6968" for this suite.

• [SLOW TEST:8.949 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":244,"skipped":3961,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:09.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-badab2b3-0326-4884-ac81-d54e572cc20b
STEP: Creating a pod to test consume configMaps
Jul 19 12:41:09.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3" in namespace "configmap-3515" to be "success or failure"
Jul 19 12:41:09.784: INFO: Pod "pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.865984ms
Jul 19 12:41:11.788: INFO: Pod "pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023131121s
Jul 19 12:41:14.566: INFO: Pod "pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3": Phase="Running", Reason="", readiness=true. Elapsed: 4.801856696s
Jul 19 12:41:16.581: INFO: Pod "pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.816523565s
STEP: Saw pod success
Jul 19 12:41:16.581: INFO: Pod "pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3" satisfied condition "success or failure"
Jul 19 12:41:16.598: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3 container configmap-volume-test: 
STEP: delete the pod
Jul 19 12:41:16.778: INFO: Waiting for pod pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3 to disappear
Jul 19 12:41:16.784: INFO: Pod pod-configmaps-807b9f93-c84f-4373-bcf8-81c1022e54c3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:16.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3515" for this suite.

• [SLOW TEST:7.129 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3982,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:16.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jul 19 12:41:17.576: INFO: Waiting up to 5m0s for pod "client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33" in namespace "containers-5470" to be "success or failure"
Jul 19 12:41:17.748: INFO: Pod "client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33": Phase="Pending", Reason="", readiness=false. Elapsed: 171.866136ms
Jul 19 12:41:19.784: INFO: Pod "client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207908701s
Jul 19 12:41:21.787: INFO: Pod "client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210719801s
Jul 19 12:41:23.791: INFO: Pod "client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33": Phase="Running", Reason="", readiness=true. Elapsed: 6.21434493s
Jul 19 12:41:25.794: INFO: Pod "client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.217048113s
STEP: Saw pod success
Jul 19 12:41:25.794: INFO: Pod "client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33" satisfied condition "success or failure"
Jul 19 12:41:25.796: INFO: Trying to get logs from node jerma-worker pod client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33 container test-container: 
STEP: delete the pod
Jul 19 12:41:25.856: INFO: Waiting for pod client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33 to disappear
Jul 19 12:41:25.874: INFO: Pod client-containers-692d8559-34ce-49e9-8b8b-eda4a2acab33 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:25.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5470" for this suite.

• [SLOW TEST:9.059 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3999,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:25.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Jul 19 12:41:25.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul 19 12:41:26.107: INFO: stderr: ""
Jul 19 12:41:26.107: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45705\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45705/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:26.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9537" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":247,"skipped":4002,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:26.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 19 12:41:26.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7182'
Jul 19 12:41:26.344: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 19 12:41:26.344: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jul 19 12:41:26.398: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-nhpnv]
Jul 19 12:41:26.399: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-nhpnv" in namespace "kubectl-7182" to be "running and ready"
Jul 19 12:41:26.421: INFO: Pod "e2e-test-httpd-rc-nhpnv": Phase="Pending", Reason="", readiness=false. Elapsed: 22.385534ms
Jul 19 12:41:28.583: INFO: Pod "e2e-test-httpd-rc-nhpnv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184672159s
Jul 19 12:41:30.586: INFO: Pod "e2e-test-httpd-rc-nhpnv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187652256s
Jul 19 12:41:32.589: INFO: Pod "e2e-test-httpd-rc-nhpnv": Phase="Running", Reason="", readiness=true. Elapsed: 6.190603288s
Jul 19 12:41:32.589: INFO: Pod "e2e-test-httpd-rc-nhpnv" satisfied condition "running and ready"
Jul 19 12:41:32.589: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-nhpnv]
Jul 19 12:41:32.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7182'
Jul 19 12:41:32.691: INFO: stderr: ""
Jul 19 12:41:32.691: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.38. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.38. Set the 'ServerName' directive globally to suppress this message\n[Sun Jul 19 12:41:30.255485 2020] [mpm_event:notice] [pid 1:tid 139651475635048] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Jul 19 12:41:30.255531 2020] [core:notice] [pid 1:tid 139651475635048] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Jul 19 12:41:32.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7182'
Jul 19 12:41:32.796: INFO: stderr: ""
Jul 19 12:41:32.796: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:32.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7182" for this suite.

• [SLOW TEST:6.687 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":248,"skipped":4008,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:32.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:41:32.973: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul 19 12:41:33.048: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul 19 12:41:38.126: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 19 12:41:38.126: INFO: Creating deployment "test-rolling-update-deployment"
Jul 19 12:41:38.128: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul 19 12:41:38.156: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul 19 12:41:40.189: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jul 19 12:41:40.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:41:42.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759298, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:41:44.198: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul 19 12:41:44.203: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-4186 /apis/apps/v1/namespaces/deployment-4186/deployments/test-rolling-update-deployment d7d5755b-9f9a-4ade-9880-ab9918e054ec 2429386 1 2020-07-19 12:41:38 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b0c0a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-19 12:41:38 +0000 UTC,LastTransitionTime:2020-07-19 12:41:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-07-19 12:41:42 +0000 UTC,LastTransitionTime:2020-07-19 12:41:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul 19 12:41:44.205: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-4186 /apis/apps/v1/namespaces/deployment-4186/replicasets/test-rolling-update-deployment-67cf4f6444 14e341fd-8883-4c84-870d-07a626a4305a 2429375 1 2020-07-19 12:41:38 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d7d5755b-9f9a-4ade-9880-ab9918e054ec 0xc004a7c1c7 0xc004a7c1c8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004a7c238  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 19 12:41:44.205: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul 19 12:41:44.205: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-4186 /apis/apps/v1/namespaces/deployment-4186/replicasets/test-rolling-update-controller 23679c15-4b43-4062-b183-dfc6c3733ff9 2429384 2 2020-07-19 12:41:32 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d7d5755b-9f9a-4ade-9880-ab9918e054ec 0xc004a7c0f7 0xc004a7c0f8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004a7c158  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 19 12:41:44.207: INFO: Pod "test-rolling-update-deployment-67cf4f6444-4vc6z" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-4vc6z test-rolling-update-deployment-67cf4f6444- deployment-4186 /api/v1/namespaces/deployment-4186/pods/test-rolling-update-deployment-67cf4f6444-4vc6z ad0731c9-d7fd-47ff-9910-b7a85b506996 2429374 0 2020-07-19 12:41:38 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 14e341fd-8883-4c84-870d-07a626a4305a 0xc004a7c6a7 0xc004a7c6a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vr2xz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vr2xz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vr2xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:41:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:41:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-19 12:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.40,StartTime:2020-07-19 12:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-19 12:41:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://13308d9ddf2ea9ca7977d3f301c5498736069bce339f6c5e0d9bb61e4ab96894,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:44.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4186" for this suite.

• [SLOW TEST:11.410 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":249,"skipped":4019,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:44.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:41:44.343: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d4b0d34c-dd3c-428d-bb08-7bf8149772a6", Controller:(*bool)(0xc004996ba2), BlockOwnerDeletion:(*bool)(0xc004996ba3)}}
Jul 19 12:41:44.355: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"dea832ed-f9be-47ed-b40d-f4af887c3347", Controller:(*bool)(0xc004b0cb76), BlockOwnerDeletion:(*bool)(0xc004b0cb77)}}
Jul 19 12:41:44.385: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"daa9b4b2-fd4d-42ba-afe9-fe1e99c504e3", Controller:(*bool)(0xc004b0cde6), BlockOwnerDeletion:(*bool)(0xc004b0cde7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:41:49.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-640" for this suite.

• [SLOW TEST:5.627 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":250,"skipped":4050,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:41:49.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1060.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1060.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1060.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1060.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 19 12:42:03.776: INFO: DNS probes using dns-1060/dns-test-fb9b95e6-a5ff-4f62-9282-e37aa5af9a1d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:42:03.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1060" for this suite.

• [SLOW TEST:14.135 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":251,"skipped":4065,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:42:03.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-40503c12-524b-475f-9685-4f00e895644a
STEP: Creating a pod to test consume secrets
Jul 19 12:42:04.584: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11" in namespace "projected-9057" to be "success or failure"
Jul 19 12:42:04.607: INFO: Pod "pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11": Phase="Pending", Reason="", readiness=false. Elapsed: 23.142136ms
Jul 19 12:42:06.611: INFO: Pod "pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027022055s
Jul 19 12:42:08.614: INFO: Pod "pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11": Phase="Running", Reason="", readiness=true. Elapsed: 4.030482138s
Jul 19 12:42:10.618: INFO: Pod "pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034326821s
STEP: Saw pod success
Jul 19 12:42:10.618: INFO: Pod "pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11" satisfied condition "success or failure"
Jul 19 12:42:10.621: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11 container projected-secret-volume-test: 
STEP: delete the pod
Jul 19 12:42:10.644: INFO: Waiting for pod pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11 to disappear
Jul 19 12:42:10.654: INFO: Pod pod-projected-secrets-527fa69e-80af-44fe-a764-bd9bbf43fd11 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:42:10.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9057" for this suite.

• [SLOW TEST:6.689 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4078,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:42:10.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:42:15.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1301" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":253,"skipped":4084,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:42:15.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 19 12:42:15.718: INFO: Waiting up to 5m0s for pod "pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0" in namespace "emptydir-4988" to be "success or failure"
Jul 19 12:42:15.722: INFO: Pod "pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.642613ms
Jul 19 12:42:17.750: INFO: Pod "pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03169312s
Jul 19 12:42:19.754: INFO: Pod "pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035754342s
STEP: Saw pod success
Jul 19 12:42:19.754: INFO: Pod "pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0" satisfied condition "success or failure"
Jul 19 12:42:19.757: INFO: Trying to get logs from node jerma-worker2 pod pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0 container test-container: 
STEP: delete the pod
Jul 19 12:42:19.782: INFO: Waiting for pod pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0 to disappear
Jul 19 12:42:19.786: INFO: Pod pod-a5799d5c-dcc8-4355-ac09-c24efb37e5b0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:42:19.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4988" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4106,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:42:19.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 19 12:42:19.885: INFO: Waiting up to 5m0s for pod "pod-84d37c7a-51ce-4413-a76c-341a6c068e93" in namespace "emptydir-2433" to be "success or failure"
Jul 19 12:42:19.898: INFO: Pod "pod-84d37c7a-51ce-4413-a76c-341a6c068e93": Phase="Pending", Reason="", readiness=false. Elapsed: 13.351161ms
Jul 19 12:42:22.001: INFO: Pod "pod-84d37c7a-51ce-4413-a76c-341a6c068e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11665043s
Jul 19 12:42:24.005: INFO: Pod "pod-84d37c7a-51ce-4413-a76c-341a6c068e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120562846s
STEP: Saw pod success
Jul 19 12:42:24.005: INFO: Pod "pod-84d37c7a-51ce-4413-a76c-341a6c068e93" satisfied condition "success or failure"
Jul 19 12:42:24.008: INFO: Trying to get logs from node jerma-worker pod pod-84d37c7a-51ce-4413-a76c-341a6c068e93 container test-container: 
STEP: delete the pod
Jul 19 12:42:24.039: INFO: Waiting for pod pod-84d37c7a-51ce-4413-a76c-341a6c068e93 to disappear
Jul 19 12:42:24.050: INFO: Pod pod-84d37c7a-51ce-4413-a76c-341a6c068e93 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:42:24.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2433" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4124,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:42:24.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:42:24.512: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 12:42:26.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759344, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759344, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759344, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759344, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:42:29.882: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jul 19 12:42:29.902: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:42:29.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6413" for this suite.
STEP: Destroying namespace "webhook-6413-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.780 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":256,"skipped":4124,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:42:30.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 19 12:42:31.028: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 19 12:42:31.037: INFO: Waiting for terminating namespaces to be deleted...
Jul 19 12:42:31.039: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul 19 12:42:31.043: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 19 12:42:31.043: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 19 12:42:31.043: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 19 12:42:31.043: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 19 12:42:31.043: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul 19 12:42:31.047: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 19 12:42:31.047: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 19 12:42:31.047: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 19 12:42:31.047: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7af63cc3-c197-479a-b0e6-bb38bdcbc1a6 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-7af63cc3-c197-479a-b0e6-bb38bdcbc1a6 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7af63cc3-c197-479a-b0e6-bb38bdcbc1a6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:47:39.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4020" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:308.790 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":257,"skipped":4126,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:47:39.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 19 12:47:40.146: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 19 12:47:42.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:47:44.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:47:46.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:47:49.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:47:50.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 19 12:47:52.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730759660, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 19 12:47:55.266: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:47:55.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-103-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:47:58.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3150" for this suite.
STEP: Destroying namespace "webhook-3150-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.588 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":258,"skipped":4128,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:47:59.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-1d1ab8d8-0a59-4bc5-b0cc-7344dc0f41f5
STEP: Creating a pod to test consume secrets
Jul 19 12:48:00.791: INFO: Waiting up to 5m0s for pod "pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b" in namespace "secrets-7685" to be "success or failure"
Jul 19 12:48:00.794: INFO: Pod "pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.930702ms
Jul 19 12:48:02.836: INFO: Pod "pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044724523s
Jul 19 12:48:04.842: INFO: Pod "pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050203748s
Jul 19 12:48:06.917: INFO: Pod "pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b": Phase="Running", Reason="", readiness=true. Elapsed: 6.125501932s
Jul 19 12:48:08.921: INFO: Pod "pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12926678s
STEP: Saw pod success
Jul 19 12:48:08.921: INFO: Pod "pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b" satisfied condition "success or failure"
Jul 19 12:48:08.924: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b container secret-volume-test: 
STEP: delete the pod
Jul 19 12:48:08.992: INFO: Waiting for pod pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b to disappear
Jul 19 12:48:09.033: INFO: Pod pod-secrets-31e0bbec-b668-4faa-91eb-c6f371bba27b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:48:09.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7685" for this suite.

• [SLOW TEST:9.985 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4141,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:48:09.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-8263
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8263 to expose endpoints map[]
Jul 19 12:48:10.527: INFO: successfully validated that service endpoint-test2 in namespace services-8263 exposes endpoints map[] (113.646912ms elapsed)
STEP: Creating pod pod1 in namespace services-8263
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8263 to expose endpoints map[pod1:[80]]
Jul 19 12:48:15.081: INFO: successfully validated that service endpoint-test2 in namespace services-8263 exposes endpoints map[pod1:[80]] (4.315116812s elapsed)
STEP: Creating pod pod2 in namespace services-8263
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8263 to expose endpoints map[pod1:[80] pod2:[80]]
Jul 19 12:48:19.460: INFO: Unexpected endpoints: found map[0986bd60-e620-4f0a-a95f-4d6c769a3427:[80]], expected map[pod1:[80] pod2:[80]] (4.376235307s elapsed, will retry)
Jul 19 12:48:20.467: INFO: successfully validated that service endpoint-test2 in namespace services-8263 exposes endpoints map[pod1:[80] pod2:[80]] (5.383926203s elapsed)
STEP: Deleting pod pod1 in namespace services-8263
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8263 to expose endpoints map[pod2:[80]]
Jul 19 12:48:21.704: INFO: successfully validated that service endpoint-test2 in namespace services-8263 exposes endpoints map[pod2:[80]] (1.232756963s elapsed)
STEP: Deleting pod pod2 in namespace services-8263
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8263 to expose endpoints map[]
Jul 19 12:48:23.154: INFO: successfully validated that service endpoint-test2 in namespace services-8263 exposes endpoints map[] (1.445571171s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:48:23.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8263" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.209 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":260,"skipped":4162,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:48:23.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1914
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-1914
Jul 19 12:48:23.830: INFO: Found 0 stateful pods, waiting for 1
Jul 19 12:48:33.835: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 19 12:48:33.897: INFO: Deleting all statefulset in ns statefulset-1914
Jul 19 12:48:34.193: INFO: Scaling statefulset ss to 0
Jul 19 12:48:54.575: INFO: Waiting for statefulset status.replicas updated to 0
Jul 19 12:48:54.578: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:48:54.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1914" for this suite.

• [SLOW TEST:31.350 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":261,"skipped":4167,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:48:54.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 19 12:48:55.051: INFO: Waiting up to 5m0s for pod "pod-049ddd8a-0043-4d8a-8333-0e14250ac59b" in namespace "emptydir-2264" to be "success or failure"
Jul 19 12:48:55.065: INFO: Pod "pod-049ddd8a-0043-4d8a-8333-0e14250ac59b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.764455ms
Jul 19 12:48:57.225: INFO: Pod "pod-049ddd8a-0043-4d8a-8333-0e14250ac59b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174480824s
Jul 19 12:48:59.229: INFO: Pod "pod-049ddd8a-0043-4d8a-8333-0e14250ac59b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178080537s
Jul 19 12:49:01.578: INFO: Pod "pod-049ddd8a-0043-4d8a-8333-0e14250ac59b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526815913s
Jul 19 12:49:03.788: INFO: Pod "pod-049ddd8a-0043-4d8a-8333-0e14250ac59b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.737147737s
STEP: Saw pod success
Jul 19 12:49:03.788: INFO: Pod "pod-049ddd8a-0043-4d8a-8333-0e14250ac59b" satisfied condition "success or failure"
Jul 19 12:49:03.793: INFO: Trying to get logs from node jerma-worker pod pod-049ddd8a-0043-4d8a-8333-0e14250ac59b container test-container: 
STEP: delete the pod
Jul 19 12:49:05.064: INFO: Waiting for pod pod-049ddd8a-0043-4d8a-8333-0e14250ac59b to disappear
Jul 19 12:49:05.450: INFO: Pod pod-049ddd8a-0043-4d8a-8333-0e14250ac59b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:49:05.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2264" for this suite.

• [SLOW TEST:10.718 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4184,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:49:05.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8353
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8353
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8353
Jul 19 12:49:06.204: INFO: Found 0 stateful pods, waiting for 1
Jul 19 12:49:16.440: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 19 12:49:16.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 19 12:49:17.479: INFO: stderr: "I0719 12:49:16.761190    3587 log.go:172] (0xc000ba0d10) (0xc000b98460) Create stream\nI0719 12:49:16.761231    3587 log.go:172] (0xc000ba0d10) (0xc000b98460) Stream added, broadcasting: 1\nI0719 12:49:16.763951    3587 log.go:172] (0xc000ba0d10) Reply frame received for 1\nI0719 12:49:16.763984    3587 log.go:172] (0xc000ba0d10) (0xc000a123c0) Create stream\nI0719 12:49:16.763991    3587 log.go:172] (0xc000ba0d10) (0xc000a123c0) Stream added, broadcasting: 3\nI0719 12:49:16.764866    3587 log.go:172] (0xc000ba0d10) Reply frame received for 3\nI0719 12:49:16.764902    3587 log.go:172] (0xc000ba0d10) (0xc000b98500) Create stream\nI0719 12:49:16.764920    3587 log.go:172] (0xc000ba0d10) (0xc000b98500) Stream added, broadcasting: 5\nI0719 12:49:16.765848    3587 log.go:172] (0xc000ba0d10) Reply frame received for 5\nI0719 12:49:16.814307    3587 log.go:172] (0xc000ba0d10) Data frame received for 5\nI0719 12:49:16.814325    3587 log.go:172] (0xc000b98500) (5) Data frame handling\nI0719 12:49:16.814335    3587 log.go:172] (0xc000b98500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:49:17.472912    3587 log.go:172] (0xc000ba0d10) Data frame received for 5\nI0719 12:49:17.472945    3587 log.go:172] (0xc000b98500) (5) Data frame handling\nI0719 12:49:17.472995    3587 log.go:172] (0xc000ba0d10) Data frame received for 3\nI0719 12:49:17.473006    3587 log.go:172] (0xc000a123c0) (3) Data frame handling\nI0719 12:49:17.473021    3587 log.go:172] (0xc000a123c0) (3) Data frame sent\nI0719 12:49:17.473110    3587 log.go:172] (0xc000ba0d10) Data frame received for 3\nI0719 12:49:17.473126    3587 log.go:172] (0xc000a123c0) (3) Data frame handling\nI0719 12:49:17.475758    3587 log.go:172] (0xc000ba0d10) Data frame received for 1\nI0719 12:49:17.475783    3587 log.go:172] (0xc000b98460) (1) Data frame handling\nI0719 12:49:17.475797    3587 log.go:172] (0xc000b98460) (1) Data frame sent\nI0719 12:49:17.475816    3587 log.go:172] (0xc000ba0d10) (0xc000b98460) Stream removed, broadcasting: 1\nI0719 12:49:17.475841    3587 log.go:172] (0xc000ba0d10) Go away received\nI0719 12:49:17.476174    3587 log.go:172] (0xc000ba0d10) (0xc000b98460) Stream removed, broadcasting: 1\nI0719 12:49:17.476198    3587 log.go:172] (0xc000ba0d10) (0xc000a123c0) Stream removed, broadcasting: 3\nI0719 12:49:17.476208    3587 log.go:172] (0xc000ba0d10) (0xc000b98500) Stream removed, broadcasting: 5\n"
Jul 19 12:49:17.479: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 19 12:49:17.479: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 19 12:49:17.516: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 19 12:49:27.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 19 12:49:27.577: INFO: Waiting for statefulset status.replicas updated to 0
Jul 19 12:49:27.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999954s
Jul 19 12:49:28.960: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.963177456s
Jul 19 12:49:30.289: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.95690664s
Jul 19 12:49:31.293: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.627684641s
Jul 19 12:49:32.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.623876143s
Jul 19 12:49:33.519: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.619461844s
Jul 19 12:49:34.571: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.397937012s
Jul 19 12:49:35.574: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.346000316s
Jul 19 12:49:36.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.342191308s
Jul 19 12:49:37.583: INFO: Verifying statefulset ss doesn't scale past 1 for another 337.959854ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8353
Jul 19 12:49:38.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 19 12:49:44.265: INFO: stderr: "I0719 12:49:44.205529    3607 log.go:172] (0xc0006c49a0) (0xc000540000) Create stream\nI0719 12:49:44.205564    3607 log.go:172] (0xc0006c49a0) (0xc000540000) Stream added, broadcasting: 1\nI0719 12:49:44.208506    3607 log.go:172] (0xc0006c49a0) Reply frame received for 1\nI0719 12:49:44.208534    3607 log.go:172] (0xc0006c49a0) (0xc00069af00) Create stream\nI0719 12:49:44.208546    3607 log.go:172] (0xc0006c49a0) (0xc00069af00) Stream added, broadcasting: 3\nI0719 12:49:44.209250    3607 log.go:172] (0xc0006c49a0) Reply frame received for 3\nI0719 12:49:44.209269    3607 log.go:172] (0xc0006c49a0) (0xc0005400a0) Create stream\nI0719 12:49:44.209275    3607 log.go:172] (0xc0006c49a0) (0xc0005400a0) Stream added, broadcasting: 5\nI0719 12:49:44.210227    3607 log.go:172] (0xc0006c49a0) Reply frame received for 5\nI0719 12:49:44.260915    3607 log.go:172] (0xc0006c49a0) Data frame received for 5\nI0719 12:49:44.260944    3607 log.go:172] (0xc0005400a0) (5) Data frame handling\nI0719 12:49:44.260954    3607 log.go:172] (0xc0005400a0) (5) Data frame sent\nI0719 12:49:44.260959    3607 log.go:172] (0xc0006c49a0) Data frame received for 5\nI0719 12:49:44.260963    3607 log.go:172] (0xc0005400a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0719 12:49:44.260975    3607 log.go:172] (0xc0006c49a0) Data frame received for 3\nI0719 12:49:44.260982    3607 log.go:172] (0xc00069af00) (3) Data frame handling\nI0719 12:49:44.260987    3607 log.go:172] (0xc00069af00) (3) Data frame sent\nI0719 12:49:44.261060    3607 log.go:172] (0xc0006c49a0) Data frame received for 3\nI0719 12:49:44.261087    3607 log.go:172] (0xc00069af00) (3) Data frame handling\nI0719 12:49:44.262387    3607 log.go:172] (0xc0006c49a0) Data frame received for 1\nI0719 12:49:44.262396    3607 log.go:172] (0xc000540000) (1) Data frame handling\nI0719 12:49:44.262402    3607 log.go:172] (0xc000540000) (1) Data frame sent\nI0719 12:49:44.262413    3607 log.go:172] (0xc0006c49a0) (0xc000540000) Stream removed, broadcasting: 1\nI0719 12:49:44.262444    3607 log.go:172] (0xc0006c49a0) Go away received\nI0719 12:49:44.262615    3607 log.go:172] (0xc0006c49a0) (0xc000540000) Stream removed, broadcasting: 1\nI0719 12:49:44.262624    3607 log.go:172] (0xc0006c49a0) (0xc00069af00) Stream removed, broadcasting: 3\nI0719 12:49:44.262633    3607 log.go:172] (0xc0006c49a0) (0xc0005400a0) Stream removed, broadcasting: 5\n"
Jul 19 12:49:44.265: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 19 12:49:44.265: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 19 12:49:44.268: INFO: Found 1 stateful pods, waiting for 3
Jul 19 12:49:54.338: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 19 12:49:54.338: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 19 12:49:54.338: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 19 12:50:04.465: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 19 12:50:04.465: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 19 12:50:04.465: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 19 12:50:04.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 19 12:50:05.292: INFO: stderr: "I0719 12:50:05.207984    3630 log.go:172] (0xc000be93f0) (0xc000cdc500) Create stream\nI0719 12:50:05.208033    3630 log.go:172] (0xc000be93f0) (0xc000cdc500) Stream added, broadcasting: 1\nI0719 12:50:05.211481    3630 log.go:172] (0xc000be93f0) Reply frame received for 1\nI0719 12:50:05.211510    3630 log.go:172] (0xc000be93f0) (0xc00061bea0) Create stream\nI0719 12:50:05.211517    3630 log.go:172] (0xc000be93f0) (0xc00061bea0) Stream added, broadcasting: 3\nI0719 12:50:05.212213    3630 log.go:172] (0xc000be93f0) Reply frame received for 3\nI0719 12:50:05.212240    3630 log.go:172] (0xc000be93f0) (0xc000428c80) Create stream\nI0719 12:50:05.212249    3630 log.go:172] (0xc000be93f0) (0xc000428c80) Stream added, broadcasting: 5\nI0719 12:50:05.213040    3630 log.go:172] (0xc000be93f0) Reply frame received for 5\nI0719 12:50:05.286537    3630 log.go:172] (0xc000be93f0) Data frame received for 3\nI0719 12:50:05.286587    3630 log.go:172] (0xc00061bea0) (3) Data frame handling\nI0719 12:50:05.286604    3630 log.go:172] (0xc00061bea0) (3) Data frame sent\nI0719 12:50:05.286613    3630 log.go:172] (0xc000be93f0) Data frame received for 3\nI0719 12:50:05.286621    3630 log.go:172] (0xc00061bea0) (3) Data frame handling\nI0719 12:50:05.286673    3630 log.go:172] (0xc000be93f0) Data frame received for 5\nI0719 12:50:05.286700    3630 log.go:172] (0xc000428c80) (5) Data frame handling\nI0719 12:50:05.286726    3630 log.go:172] (0xc000428c80) (5) Data frame sent\nI0719 12:50:05.286737    3630 log.go:172] (0xc000be93f0) Data frame received for 5\nI0719 12:50:05.286743    3630 log.go:172] (0xc000428c80) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:50:05.287937    3630 log.go:172] (0xc000be93f0) Data frame received for 1\nI0719 12:50:05.287950    3630 log.go:172] (0xc000cdc500) (1) Data frame handling\nI0719 12:50:05.287962    3630 log.go:172] (0xc000cdc500) (1) Data frame sent\nI0719 12:50:05.287970    3630 log.go:172] (0xc000be93f0) (0xc000cdc500) Stream removed, broadcasting: 1\nI0719 12:50:05.287979    3630 log.go:172] (0xc000be93f0) Go away received\nI0719 12:50:05.288408    3630 log.go:172] (0xc000be93f0) (0xc000cdc500) Stream removed, broadcasting: 1\nI0719 12:50:05.288427    3630 log.go:172] (0xc000be93f0) (0xc00061bea0) Stream removed, broadcasting: 3\nI0719 12:50:05.288437    3630 log.go:172] (0xc000be93f0) (0xc000428c80) Stream removed, broadcasting: 5\n"
Jul 19 12:50:05.292: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 19 12:50:05.292: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 19 12:50:05.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 19 12:50:06.336: INFO: stderr: "I0719 12:50:06.127105    3652 log.go:172] (0xc0000f5130) (0xc00075ba40) Create stream\nI0719 12:50:06.127155    3652 log.go:172] (0xc0000f5130) (0xc00075ba40) Stream added, broadcasting: 1\nI0719 12:50:06.129310    3652 log.go:172] (0xc0000f5130) Reply frame received for 1\nI0719 12:50:06.129351    3652 log.go:172] (0xc0000f5130) (0xc000b36000) Create stream\nI0719 12:50:06.129363    3652 log.go:172] (0xc0000f5130) (0xc000b36000) Stream added, broadcasting: 3\nI0719 12:50:06.130108    3652 log.go:172] (0xc0000f5130) Reply frame received for 3\nI0719 12:50:06.130153    3652 log.go:172] (0xc0000f5130) (0xc00075bc20) Create stream\nI0719 12:50:06.130171    3652 log.go:172] (0xc0000f5130) (0xc00075bc20) Stream added, broadcasting: 5\nI0719 12:50:06.130873    3652 log.go:172] (0xc0000f5130) Reply frame received for 5\nI0719 12:50:06.195023    3652 log.go:172] (0xc0000f5130) Data frame received for 5\nI0719 12:50:06.195060    3652 log.go:172] (0xc00075bc20) (5) Data frame handling\nI0719 12:50:06.195085    3652 log.go:172] (0xc00075bc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:50:06.329480    3652 log.go:172] (0xc0000f5130) Data frame received for 3\nI0719 12:50:06.329523    3652 log.go:172] (0xc000b36000) (3) Data frame handling\nI0719 12:50:06.329551    3652 log.go:172] (0xc000b36000) (3) Data frame sent\nI0719 12:50:06.329783    3652 log.go:172] (0xc0000f5130) Data frame received for 5\nI0719 12:50:06.329810    3652 log.go:172] (0xc00075bc20) (5) Data frame handling\nI0719 12:50:06.329857    3652 log.go:172] (0xc0000f5130) Data frame received for 3\nI0719 12:50:06.329881    3652 log.go:172] (0xc000b36000) (3) Data frame handling\nI0719 12:50:06.331712    3652 log.go:172] (0xc0000f5130) Data frame received for 1\nI0719 12:50:06.331746    3652 log.go:172] (0xc00075ba40) (1) Data frame handling\nI0719 12:50:06.331764    3652 log.go:172] (0xc00075ba40) (1) Data frame sent\nI0719 12:50:06.331782    3652 log.go:172] (0xc0000f5130) (0xc00075ba40) Stream removed, broadcasting: 1\nI0719 12:50:06.331820    3652 log.go:172] (0xc0000f5130) Go away received\nI0719 12:50:06.332337    3652 log.go:172] (0xc0000f5130) (0xc00075ba40) Stream removed, broadcasting: 1\nI0719 12:50:06.332363    3652 log.go:172] (0xc0000f5130) (0xc000b36000) Stream removed, broadcasting: 3\nI0719 12:50:06.332378    3652 log.go:172] (0xc0000f5130) (0xc00075bc20) Stream removed, broadcasting: 5\n"
Jul 19 12:50:06.336: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 19 12:50:06.336: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 19 12:50:06.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 19 12:50:07.175: INFO: stderr: "I0719 12:50:06.983637    3672 log.go:172] (0xc000107290) (0xc0006f3c20) Create stream\nI0719 12:50:06.983693    3672 log.go:172] (0xc000107290) (0xc0006f3c20) Stream added, broadcasting: 1\nI0719 12:50:06.986202    3672 log.go:172] (0xc000107290) Reply frame received for 1\nI0719 12:50:06.986254    3672 log.go:172] (0xc000107290) (0xc0006f3e00) Create stream\nI0719 12:50:06.986268    3672 log.go:172] (0xc000107290) (0xc0006f3e00) Stream added, broadcasting: 3\nI0719 12:50:06.987214    3672 log.go:172] (0xc000107290) Reply frame received for 3\nI0719 12:50:06.987238    3672 log.go:172] (0xc000107290) (0xc0006f3ea0) Create stream\nI0719 12:50:06.987243    3672 log.go:172] (0xc000107290) (0xc0006f3ea0) Stream added, broadcasting: 5\nI0719 12:50:06.988124    3672 log.go:172] (0xc000107290) Reply frame received for 5\nI0719 12:50:07.057206    3672 log.go:172] (0xc000107290) Data frame received for 5\nI0719 12:50:07.057231    3672 log.go:172] (0xc0006f3ea0) (5) Data frame handling\nI0719 12:50:07.057246    3672 log.go:172] (0xc0006f3ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0719 12:50:07.168218    3672 log.go:172] (0xc000107290) Data frame received for 3\nI0719 12:50:07.168259    3672 log.go:172] (0xc0006f3e00) (3) Data frame handling\nI0719 12:50:07.168282    3672 log.go:172] (0xc0006f3e00) (3) Data frame sent\nI0719 12:50:07.168455    3672 log.go:172] (0xc000107290) Data frame received for 3\nI0719 12:50:07.168483    3672 log.go:172] (0xc0006f3e00) (3) Data frame handling\nI0719 12:50:07.168502    3672 log.go:172] (0xc000107290) Data frame received for 5\nI0719 12:50:07.168517    3672 log.go:172] (0xc0006f3ea0) (5) Data frame handling\nI0719 12:50:07.170146    3672 log.go:172] (0xc000107290) Data frame received for 1\nI0719 12:50:07.170172    3672 log.go:172] (0xc0006f3c20) (1) Data frame handling\nI0719 12:50:07.170186    3672 log.go:172] (0xc0006f3c20) (1) Data frame sent\nI0719 12:50:07.170202    3672 log.go:172] (0xc000107290) (0xc0006f3c20) Stream removed, broadcasting: 1\nI0719 12:50:07.170224    3672 log.go:172] (0xc000107290) Go away received\nI0719 12:50:07.170536    3672 log.go:172] (0xc000107290) (0xc0006f3c20) Stream removed, broadcasting: 1\nI0719 12:50:07.170559    3672 log.go:172] (0xc000107290) (0xc0006f3e00) Stream removed, broadcasting: 3\nI0719 12:50:07.170569    3672 log.go:172] (0xc000107290) (0xc0006f3ea0) Stream removed, broadcasting: 5\n"
Jul 19 12:50:07.175: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 19 12:50:07.175: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 19 12:50:07.175: INFO: Waiting for statefulset status.replicas updated to 0
Jul 19 12:50:07.187: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jul 19 12:50:17.236: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 19 12:50:17.236: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 19 12:50:17.236: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 19 12:50:17.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999766s
Jul 19 12:50:18.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98230072s
Jul 19 12:50:19.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.927742571s
Jul 19 12:50:20.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.903833774s
Jul 19 12:50:21.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.831690881s
Jul 19 12:50:22.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.826546949s
Jul 19 12:50:23.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.82196958s
Jul 19 12:50:24.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.816779294s
Jul 19 12:50:25.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.802077842s
Jul 19 12:50:26.460: INFO: Verifying statefulset ss doesn't scale past 3 for another 785.856393ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8353
Jul 19 12:50:27.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 19 12:50:28.060: INFO: stderr: "I0719 12:50:27.936954    3694 log.go:172] (0xc000118dc0) (0xc0006619a0) Create stream\nI0719 12:50:27.937032    3694 log.go:172] (0xc000118dc0) (0xc0006619a0) Stream added, broadcasting: 1\nI0719 12:50:27.939872    3694 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0719 12:50:27.939907    3694 log.go:172] (0xc000118dc0) (0xc00079c000) Create stream\nI0719 12:50:27.939915    3694 log.go:172] (0xc000118dc0) (0xc00079c000) Stream added, broadcasting: 3\nI0719 12:50:27.941160    3694 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0719 12:50:27.941214    3694 log.go:172] (0xc000118dc0) (0xc00079c140) Create stream\nI0719 12:50:27.941231    3694 log.go:172] (0xc000118dc0) (0xc00079c140) Stream added, broadcasting: 5\nI0719 12:50:27.942410    3694 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0719 12:50:28.012900    3694 log.go:172] (0xc000118dc0) Data frame received for 5\nI0719 12:50:28.012923    3694 log.go:172] (0xc00079c140) (5) Data frame handling\nI0719 12:50:28.012933    3694 log.go:172] (0xc00079c140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0719 12:50:28.054076    3694 log.go:172] (0xc000118dc0) Data frame received for 5\nI0719 12:50:28.054134    3694 log.go:172] (0xc00079c140) (5) Data frame handling\nI0719 12:50:28.054184    3694 log.go:172] (0xc000118dc0) Data frame received for 3\nI0719 12:50:28.054206    3694 log.go:172] (0xc00079c000) (3) Data frame handling\nI0719 12:50:28.054226    3694 log.go:172] (0xc00079c000) (3) Data frame sent\nI0719 12:50:28.054251    3694 log.go:172] (0xc000118dc0) Data frame received for 3\nI0719 12:50:28.054275    3694 log.go:172] (0xc00079c000) (3) Data frame handling\nI0719 12:50:28.056293    3694 log.go:172] (0xc000118dc0) Data frame received for 1\nI0719 12:50:28.056344    3694 log.go:172] (0xc0006619a0) (1) Data frame handling\nI0719 12:50:28.056376    3694 log.go:172] (0xc0006619a0) (1) Data frame sent\nI0719 12:50:28.056402    3694 log.go:172] (0xc000118dc0) (0xc0006619a0) Stream removed, broadcasting: 1\nI0719 12:50:28.056431    3694 log.go:172] (0xc000118dc0) Go away received\nI0719 12:50:28.057122    3694 log.go:172] (0xc000118dc0) (0xc0006619a0) Stream removed, broadcasting: 1\nI0719 12:50:28.057155    3694 log.go:172] (0xc000118dc0) (0xc00079c000) Stream removed, broadcasting: 3\nI0719 12:50:28.057174    3694 log.go:172] (0xc000118dc0) (0xc00079c140) Stream removed, broadcasting: 5\n"
Jul 19 12:50:28.061: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 19 12:50:28.061: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 19 12:50:28.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 19 12:50:28.283: INFO: stderr: "I0719 12:50:28.192416    3715 log.go:172] (0xc000231130) (0xc000685ae0) Create stream\nI0719 12:50:28.192463    3715 log.go:172] (0xc000231130) (0xc000685ae0) Stream added, broadcasting: 1\nI0719 12:50:28.194970    3715 log.go:172] (0xc000231130) Reply frame received for 1\nI0719 12:50:28.195017    3715 log.go:172] (0xc000231130) (0xc000944000) Create stream\nI0719 12:50:28.195032    3715 log.go:172] (0xc000231130) (0xc000944000) Stream added, broadcasting: 3\nI0719 12:50:28.195910    3715 log.go:172] (0xc000231130) Reply frame received for 3\nI0719 12:50:28.195949    3715 log.go:172] (0xc000231130) (0xc000685cc0) Create stream\nI0719 12:50:28.195964    3715 log.go:172] (0xc000231130) (0xc000685cc0) Stream added, broadcasting: 5\nI0719 12:50:28.196923    3715 log.go:172] (0xc000231130) Reply frame received for 5\nI0719 12:50:28.245254    3715 log.go:172] (0xc000231130) Data frame received for 5\nI0719 12:50:28.245278    3715 log.go:172] (0xc000685cc0) (5) Data frame handling\nI0719 12:50:28.245291    3715 log.go:172] (0xc000685cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0719 12:50:28.277192    3715 log.go:172] (0xc000231130) Data frame received for 3\nI0719 12:50:28.277223    3715 log.go:172] (0xc000944000) (3) Data frame handling\nI0719 12:50:28.277247    3715 log.go:172] (0xc000944000) (3) Data frame sent\nI0719 12:50:28.277589    3715 log.go:172] (0xc000231130) Data frame received for 3\nI0719 12:50:28.277612    3715 log.go:172] (0xc000944000) (3) Data frame handling\nI0719 12:50:28.277661    3715 log.go:172] (0xc000231130) Data frame received for 5\nI0719 12:50:28.277689    3715 log.go:172] (0xc000685cc0) (5) Data frame handling\nI0719 12:50:28.279554    3715 log.go:172] (0xc000231130) Data frame received for 1\nI0719 12:50:28.279583    3715 log.go:172] (0xc000685ae0) (1) Data frame handling\nI0719 12:50:28.279602    3715 log.go:172] (0xc000685ae0) (1) Data frame sent\nI0719 12:50:28.279617    3715 log.go:172] (0xc000231130) (0xc000685ae0) Stream removed, broadcasting: 1\nI0719 12:50:28.279718    3715 log.go:172] (0xc000231130) Go away received\nI0719 12:50:28.279975    3715 log.go:172] (0xc000231130) (0xc000685ae0) Stream removed, broadcasting: 1\nI0719 12:50:28.280005    3715 log.go:172] (0xc000231130) (0xc000944000) Stream removed, broadcasting: 3\nI0719 12:50:28.280021    3715 log.go:172] (0xc000231130) (0xc000685cc0) Stream removed, broadcasting: 5\n"
Jul 19 12:50:28.283: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 19 12:50:28.283: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 19 12:50:28.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8353 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 19 12:50:28.646: INFO: stderr: "I0719 12:50:28.574898    3736 log.go:172] (0xc0008e7290) (0xc0009685a0) Create stream\nI0719 12:50:28.574947    3736 log.go:172] (0xc0008e7290) (0xc0009685a0) Stream added, broadcasting: 1\nI0719 12:50:28.578227    3736 log.go:172] (0xc0008e7290) Reply frame received for 1\nI0719 12:50:28.578260    3736 log.go:172] (0xc0008e7290) (0xc0007ebb80) Create stream\nI0719 12:50:28.578267    3736 log.go:172] (0xc0008e7290) (0xc0007ebb80) Stream added, broadcasting: 3\nI0719 12:50:28.578997    3736 log.go:172] (0xc0008e7290) Reply frame received for 3\nI0719 12:50:28.579020    3736 log.go:172] (0xc0008e7290) (0xc0006fa780) Create stream\nI0719 12:50:28.579026    3736 log.go:172] (0xc0008e7290) (0xc0006fa780) Stream added, broadcasting: 5\nI0719 12:50:28.579905    3736 log.go:172] (0xc0008e7290) Reply frame received for 5\nI0719 12:50:28.639533    3736 log.go:172] (0xc0008e7290) Data frame received for 5\nI0719 12:50:28.639571    3736 log.go:172] (0xc0006fa780) (5) Data frame handling\nI0719 12:50:28.639583    3736 log.go:172] (0xc0006fa780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0719 12:50:28.639615    3736 log.go:172] (0xc0008e7290) Data frame received for 3\nI0719 12:50:28.639643    3736 log.go:172] (0xc0007ebb80) (3) Data frame handling\nI0719 12:50:28.639663    3736 log.go:172] (0xc0007ebb80) (3) Data frame sent\nI0719 12:50:28.639676    3736 log.go:172] (0xc0008e7290) Data frame received for 3\nI0719 12:50:28.639686    3736 log.go:172] (0xc0007ebb80) (3) Data frame handling\nI0719 12:50:28.639863    3736 log.go:172] (0xc0008e7290) Data frame received for 5\nI0719 12:50:28.639880    3736 log.go:172] (0xc0006fa780) (5) Data frame handling\nI0719 12:50:28.641427    3736 log.go:172] (0xc0008e7290) Data frame received for 1\nI0719 12:50:28.641446    3736 log.go:172] (0xc0009685a0) (1) Data frame handling\nI0719 12:50:28.641453    3736 log.go:172] (0xc0009685a0) (1) Data frame sent\nI0719 12:50:28.641462    3736 log.go:172] (0xc0008e7290) (0xc0009685a0) Stream removed, broadcasting: 1\nI0719 12:50:28.641471    3736 log.go:172] (0xc0008e7290) Go away received\nI0719 12:50:28.641832    3736 log.go:172] (0xc0008e7290) (0xc0009685a0) Stream removed, broadcasting: 1\nI0719 12:50:28.641855    3736 log.go:172] (0xc0008e7290) (0xc0007ebb80) Stream removed, broadcasting: 3\nI0719 12:50:28.641865    3736 log.go:172] (0xc0008e7290) (0xc0006fa780) Stream removed, broadcasting: 5\n"
Jul 19 12:50:28.646: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 19 12:50:28.646: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 19 12:50:28.646: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 19 12:51:08.812: INFO: Deleting all statefulset in ns statefulset-8353
Jul 19 12:51:08.816: INFO: Scaling statefulset ss to 0
Jul 19 12:51:08.824: INFO: Waiting for statefulset status.replicas updated to 0
Jul 19 12:51:08.825: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:51:08.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8353" for this suite.

• [SLOW TEST:123.399 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":263,"skipped":4185,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:51:08.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:51:22.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-117" for this suite.

• [SLOW TEST:13.650 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":264,"skipped":4188,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:51:22.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jul 19 12:51:22.726: INFO: Waiting up to 5m0s for pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76" in namespace "var-expansion-3644" to be "success or failure"
Jul 19 12:51:22.738: INFO: Pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76": Phase="Pending", Reason="", readiness=false. Elapsed: 11.386855ms
Jul 19 12:51:24.742: INFO: Pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015397549s
Jul 19 12:51:27.255: INFO: Pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.529119146s
Jul 19 12:51:30.161: INFO: Pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76": Phase="Pending", Reason="", readiness=false. Elapsed: 7.434629216s
Jul 19 12:51:32.291: INFO: Pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76": Phase="Running", Reason="", readiness=true. Elapsed: 9.565102039s
Jul 19 12:51:34.627: INFO: Pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.900230889s
STEP: Saw pod success
Jul 19 12:51:34.627: INFO: Pod "var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76" satisfied condition "success or failure"
Jul 19 12:51:34.716: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76 container dapi-container: 
STEP: delete the pod
Jul 19 12:51:35.442: INFO: Waiting for pod var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76 to disappear
Jul 19 12:51:35.452: INFO: Pod var-expansion-d2332a32-9b2d-4500-866b-598fff5f1f76 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:51:35.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3644" for this suite.

• [SLOW TEST:12.927 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4209,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:51:35.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-e6917720-febc-4b95-875b-e237e51cb15a
STEP: Creating a pod to test consume configMaps
Jul 19 12:51:36.438: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5" in namespace "projected-3271" to be "success or failure"
Jul 19 12:51:36.836: INFO: Pod "pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5": Phase="Pending", Reason="", readiness=false. Elapsed: 398.446453ms
Jul 19 12:51:38.841: INFO: Pod "pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402853605s
Jul 19 12:51:41.004: INFO: Pod "pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.56658901s
Jul 19 12:51:43.008: INFO: Pod "pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.569638794s
STEP: Saw pod success
Jul 19 12:51:43.008: INFO: Pod "pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5" satisfied condition "success or failure"
Jul 19 12:51:43.010: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 19 12:51:43.029: INFO: Waiting for pod pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5 to disappear
Jul 19 12:51:43.159: INFO: Pod pod-projected-configmaps-77cfc917-918d-405a-b15d-3b1807d328d5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:51:43.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3271" for this suite.

• [SLOW TEST:7.708 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4233,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:51:43.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:51:43.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 19 12:51:46.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 create -f -'
Jul 19 12:51:56.436: INFO: stderr: ""
Jul 19 12:51:56.436: INFO: stdout: "e2e-test-crd-publish-openapi-8607-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 19 12:51:56.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 delete e2e-test-crd-publish-openapi-8607-crds test-cr'
Jul 19 12:51:56.587: INFO: stderr: ""
Jul 19 12:51:56.587: INFO: stdout: "e2e-test-crd-publish-openapi-8607-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul 19 12:51:56.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 apply -f -'
Jul 19 12:51:56.928: INFO: stderr: ""
Jul 19 12:51:56.928: INFO: stdout: "e2e-test-crd-publish-openapi-8607-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 19 12:51:56.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 delete e2e-test-crd-publish-openapi-8607-crds test-cr'
Jul 19 12:51:57.327: INFO: stderr: ""
Jul 19 12:51:57.327: INFO: stdout: "e2e-test-crd-publish-openapi-8607-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 19 12:51:57.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8607-crds'
Jul 19 12:51:58.822: INFO: stderr: ""
Jul 19 12:51:58.822: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8607-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:52:01.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6025" for this suite.

• [SLOW TEST:18.578 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":267,"skipped":4244,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:52:01.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6769
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6769
I0719 12:52:02.694458       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6769, replica count: 2
I0719 12:52:05.744959       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:52:08.745134       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:52:11.745369       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 19 12:52:11.745: INFO: Creating new exec pod
Jul 19 12:52:18.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6769 execpodd8ggs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 19 12:52:19.163: INFO: stderr: "I0719 12:52:19.094241    3868 log.go:172] (0xc00094e790) (0xc00090a000) Create stream\nI0719 12:52:19.094316    3868 log.go:172] (0xc00094e790) (0xc00090a000) Stream added, broadcasting: 1\nI0719 12:52:19.097867    3868 log.go:172] (0xc00094e790) Reply frame received for 1\nI0719 12:52:19.097894    3868 log.go:172] (0xc00094e790) (0xc0006f9a40) Create stream\nI0719 12:52:19.097902    3868 log.go:172] (0xc00094e790) (0xc0006f9a40) Stream added, broadcasting: 3\nI0719 12:52:19.098980    3868 log.go:172] (0xc00094e790) Reply frame received for 3\nI0719 12:52:19.099026    3868 log.go:172] (0xc00094e790) (0xc0006f9c20) Create stream\nI0719 12:52:19.099042    3868 log.go:172] (0xc00094e790) (0xc0006f9c20) Stream added, broadcasting: 5\nI0719 12:52:19.099966    3868 log.go:172] (0xc00094e790) Reply frame received for 5\nI0719 12:52:19.156817    3868 log.go:172] (0xc00094e790) Data frame received for 5\nI0719 12:52:19.156849    3868 log.go:172] (0xc0006f9c20) (5) Data frame handling\nI0719 12:52:19.156861    3868 log.go:172] (0xc0006f9c20) (5) Data frame sent\nI0719 12:52:19.156868    3868 log.go:172] (0xc00094e790) Data frame received for 5\nI0719 12:52:19.156872    3868 log.go:172] (0xc0006f9c20) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0719 12:52:19.156886    3868 log.go:172] (0xc0006f9c20) (5) Data frame sent\nI0719 12:52:19.157274    3868 log.go:172] (0xc00094e790) Data frame received for 5\nI0719 12:52:19.157301    3868 log.go:172] (0xc0006f9c20) (5) Data frame handling\nI0719 12:52:19.157318    3868 log.go:172] (0xc00094e790) Data frame received for 3\nI0719 12:52:19.157322    3868 log.go:172] (0xc0006f9a40) (3) Data frame handling\nI0719 12:52:19.159335    3868 log.go:172] (0xc00094e790) Data frame received for 1\nI0719 12:52:19.159353    3868 log.go:172] (0xc00090a000) (1) Data frame handling\nI0719 12:52:19.159365    3868 log.go:172] (0xc00090a000) (1) Data frame sent\nI0719 12:52:19.159378    3868 log.go:172] (0xc00094e790) (0xc00090a000) Stream removed, broadcasting: 1\nI0719 12:52:19.159397    3868 log.go:172] (0xc00094e790) Go away received\nI0719 12:52:19.159755    3868 log.go:172] (0xc00094e790) (0xc00090a000) Stream removed, broadcasting: 1\nI0719 12:52:19.159773    3868 log.go:172] (0xc00094e790) (0xc0006f9a40) Stream removed, broadcasting: 3\nI0719 12:52:19.159780    3868 log.go:172] (0xc00094e790) (0xc0006f9c20) Stream removed, broadcasting: 5\n"
Jul 19 12:52:19.163: INFO: stdout: ""
Jul 19 12:52:19.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6769 execpodd8ggs -- /bin/sh -x -c nc -zv -t -w 2 10.96.0.222 80'
Jul 19 12:52:19.356: INFO: stderr: "I0719 12:52:19.296284    3891 log.go:172] (0xc0000f71e0) (0xc000bae000) Create stream\nI0719 12:52:19.296354    3891 log.go:172] (0xc0000f71e0) (0xc000bae000) Stream added, broadcasting: 1\nI0719 12:52:19.298816    3891 log.go:172] (0xc0000f71e0) Reply frame received for 1\nI0719 12:52:19.298885    3891 log.go:172] (0xc0000f71e0) (0xc000bae0a0) Create stream\nI0719 12:52:19.298901    3891 log.go:172] (0xc0000f71e0) (0xc000bae0a0) Stream added, broadcasting: 3\nI0719 12:52:19.299912    3891 log.go:172] (0xc0000f71e0) Reply frame received for 3\nI0719 12:52:19.299949    3891 log.go:172] (0xc0000f71e0) (0xc000bae140) Create stream\nI0719 12:52:19.299963    3891 log.go:172] (0xc0000f71e0) (0xc000bae140) Stream added, broadcasting: 5\nI0719 12:52:19.300910    3891 log.go:172] (0xc0000f71e0) Reply frame received for 5\nI0719 12:52:19.350411    3891 log.go:172] (0xc0000f71e0) Data frame received for 3\nI0719 12:52:19.350467    3891 log.go:172] (0xc000bae0a0) (3) Data frame handling\nI0719 12:52:19.350495    3891 log.go:172] (0xc0000f71e0) Data frame received for 5\nI0719 12:52:19.350507    3891 log.go:172] (0xc000bae140) (5) Data frame handling\nI0719 12:52:19.350521    3891 log.go:172] (0xc000bae140) (5) Data frame sent\nI0719 12:52:19.350541    3891 log.go:172] (0xc0000f71e0) Data frame received for 5\nI0719 12:52:19.350553    3891 log.go:172] (0xc000bae140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.0.222 80\nConnection to 10.96.0.222 80 port [tcp/http] succeeded!\nI0719 12:52:19.351664    3891 log.go:172] (0xc0000f71e0) Data frame received for 1\nI0719 12:52:19.351684    3891 log.go:172] (0xc000bae000) (1) Data frame handling\nI0719 12:52:19.351706    3891 log.go:172] (0xc000bae000) (1) Data frame sent\nI0719 12:52:19.351723    3891 log.go:172] (0xc0000f71e0) (0xc000bae000) Stream removed, broadcasting: 1\nI0719 12:52:19.351740    3891 log.go:172] (0xc0000f71e0) Go away received\nI0719 12:52:19.352121    3891 log.go:172] (0xc0000f71e0) (0xc000bae000) Stream removed, broadcasting: 1\nI0719 12:52:19.352142    3891 log.go:172] (0xc0000f71e0) (0xc000bae0a0) Stream removed, broadcasting: 3\nI0719 12:52:19.352151    3891 log.go:172] (0xc0000f71e0) (0xc000bae140) Stream removed, broadcasting: 5\n"
Jul 19 12:52:19.356: INFO: stdout: ""
Jul 19 12:52:19.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6769 execpodd8ggs -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31033'
Jul 19 12:52:19.573: INFO: stderr: "I0719 12:52:19.494927    3912 log.go:172] (0xc000ae1290) (0xc000af6640) Create stream\nI0719 12:52:19.494997    3912 log.go:172] (0xc000ae1290) (0xc000af6640) Stream added, broadcasting: 1\nI0719 12:52:19.499517    3912 log.go:172] (0xc000ae1290) Reply frame received for 1\nI0719 12:52:19.499562    3912 log.go:172] (0xc000ae1290) (0xc000612780) Create stream\nI0719 12:52:19.499575    3912 log.go:172] (0xc000ae1290) (0xc000612780) Stream added, broadcasting: 3\nI0719 12:52:19.500494    3912 log.go:172] (0xc000ae1290) Reply frame received for 3\nI0719 12:52:19.500527    3912 log.go:172] (0xc000ae1290) (0xc0003ed540) Create stream\nI0719 12:52:19.500538    3912 log.go:172] (0xc000ae1290) (0xc0003ed540) Stream added, broadcasting: 5\nI0719 12:52:19.501489    3912 log.go:172] (0xc000ae1290) Reply frame received for 5\nI0719 12:52:19.565774    3912 log.go:172] (0xc000ae1290) Data frame received for 5\nI0719 12:52:19.565821    3912 log.go:172] (0xc0003ed540) (5) Data frame handling\nI0719 12:52:19.565856    3912 log.go:172] (0xc0003ed540) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 31033\nConnection to 172.18.0.6 31033 port [tcp/31033] succeeded!\nI0719 12:52:19.565983    3912 log.go:172] (0xc000ae1290) Data frame received for 5\nI0719 12:52:19.566013    3912 log.go:172] (0xc0003ed540) (5) Data frame handling\nI0719 12:52:19.566262    3912 log.go:172] (0xc000ae1290) Data frame received for 3\nI0719 12:52:19.566299    3912 log.go:172] (0xc000612780) (3) Data frame handling\nI0719 12:52:19.568066    3912 log.go:172] (0xc000ae1290) Data frame received for 1\nI0719 12:52:19.568092    3912 log.go:172] (0xc000af6640) (1) Data frame handling\nI0719 12:52:19.568115    3912 log.go:172] (0xc000af6640) (1) Data frame sent\nI0719 12:52:19.568131    3912 log.go:172] (0xc000ae1290) (0xc000af6640) Stream removed, broadcasting: 1\nI0719 12:52:19.568153    3912 log.go:172] (0xc000ae1290) Go away received\nI0719 12:52:19.568511    3912 log.go:172] (0xc000ae1290) (0xc000af6640) Stream removed, broadcasting: 1\nI0719 12:52:19.568533    3912 log.go:172] (0xc000ae1290) (0xc000612780) Stream removed, broadcasting: 3\nI0719 12:52:19.568548    3912 log.go:172] (0xc000ae1290) (0xc0003ed540) Stream removed, broadcasting: 5\n"
Jul 19 12:52:19.573: INFO: stdout: ""
Jul 19 12:52:19.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6769 execpodd8ggs -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 31033'
Jul 19 12:52:19.915: INFO: stderr: "I0719 12:52:19.841866    3933 log.go:172] (0xc0009ce000) (0xc0006b40a0) Create stream\nI0719 12:52:19.841942    3933 log.go:172] (0xc0009ce000) (0xc0006b40a0) Stream added, broadcasting: 1\nI0719 12:52:19.844789    3933 log.go:172] (0xc0009ce000) Reply frame received for 1\nI0719 12:52:19.844832    3933 log.go:172] (0xc0009ce000) (0xc000808000) Create stream\nI0719 12:52:19.844842    3933 log.go:172] (0xc0009ce000) (0xc000808000) Stream added, broadcasting: 3\nI0719 12:52:19.845872    3933 log.go:172] (0xc0009ce000) Reply frame received for 3\nI0719 12:52:19.845893    3933 log.go:172] (0xc0009ce000) (0xc0006b43c0) Create stream\nI0719 12:52:19.845899    3933 log.go:172] (0xc0009ce000) (0xc0006b43c0) Stream added, broadcasting: 5\nI0719 12:52:19.846654    3933 log.go:172] (0xc0009ce000) Reply frame received for 5\nI0719 12:52:19.909451    3933 log.go:172] (0xc0009ce000) Data frame received for 3\nI0719 12:52:19.909497    3933 log.go:172] (0xc000808000) (3) Data frame handling\nI0719 12:52:19.909523    3933 log.go:172] (0xc0009ce000) Data frame received for 5\nI0719 12:52:19.909535    3933 log.go:172] (0xc0006b43c0) (5) Data frame handling\nI0719 12:52:19.909545    3933 log.go:172] (0xc0006b43c0) (5) Data frame sent\nI0719 12:52:19.909551    3933 log.go:172] (0xc0009ce000) Data frame received for 5\nI0719 12:52:19.909555    3933 log.go:172] (0xc0006b43c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.10 31033\nConnection to 172.18.0.10 31033 port [tcp/31033] succeeded!\nI0719 12:52:19.910882    3933 log.go:172] (0xc0009ce000) Data frame received for 1\nI0719 12:52:19.910918    3933 log.go:172] (0xc0006b40a0) (1) Data frame handling\nI0719 12:52:19.910932    3933 log.go:172] (0xc0006b40a0) (1) Data frame sent\nI0719 12:52:19.910945    3933 log.go:172] (0xc0009ce000) (0xc0006b40a0) Stream removed, broadcasting: 1\nI0719 12:52:19.910955    3933 log.go:172] (0xc0009ce000) Go away received\nI0719 12:52:19.911343    3933 log.go:172] (0xc0009ce000) (0xc0006b40a0) Stream removed, broadcasting: 1\nI0719 12:52:19.911359    3933 log.go:172] (0xc0009ce000) (0xc000808000) Stream removed, broadcasting: 3\nI0719 12:52:19.911364    3933 log.go:172] (0xc0009ce000) (0xc0006b43c0) Stream removed, broadcasting: 5\n"
Jul 19 12:52:19.915: INFO: stdout: ""
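The four nc probes above all verify the same thing at different addresses: TCP reachability of the service by DNS name and by ClusterIP on port 80, and of both nodes on the allocated NodePort 31033. A minimal Go sketch of the equivalent check, using only the standard library (it would have to run inside the cluster, e.g. from the exec pod, for the first two endpoints to resolve):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialCheck mirrors `nc -zv -t -w 2 host port`: it verifies that a TCP
// connection can be opened within the timeout, then closes it.
func dialCheck(host, port string) error {
	addr := net.JoinHostPort(host, port)
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("connection to %s failed: %w", addr, err)
	}
	defer conn.Close()
	fmt.Printf("connection to %s succeeded\n", addr)
	return nil
}

func main() {
	// Endpoints taken from the log above.
	checks := [][2]string{
		{"externalname-service", "80"}, // service DNS name, resolvable in-cluster only
		{"10.96.0.222", "80"},          // ClusterIP
		{"172.18.0.6", "31033"},        // node IP + NodePort
		{"172.18.0.10", "31033"},       // second node IP + NodePort
	}
	for _, c := range checks {
		if err := dialCheck(c[0], c[1]); err != nil {
			fmt.Println(err)
		}
	}
}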
Jul 19 12:52:19.915: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:52:20.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6769" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.426 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":268,"skipped":4266,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
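The test above converts a Service from type ExternalName to type NodePort, then proves the NodePort answers on every node. A rough client-go sketch of that conversion (namespace and names taken from the log; the selector, the port layout, and the context-taking method signatures of recent client-go are assumptions, since the v1.17-era client shown in this log predates the context arguments):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svcs := cs.CoreV1().Services("services-6769")
	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Moving away from ExternalName: drop the external name, pick the new
	// type, and give the service a selector and port so endpoints exist
	// and a NodePort can be allocated.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"} // assumed pod label
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}

	updated, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	// The log above then probes this allocated port on each node (31033 in this run).
	fmt.Println("allocated NodePort:", updated.Spec.Ports[0].NodePort)
}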
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:52:21.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:52:52.096: INFO: Container started at 2020-07-19 12:52:27 +0000 UTC, pod became ready at 2020-07-19 12:52:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:52:52.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5933" for this suite.

• [SLOW TEST:30.991 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4350,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
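The probe test above asserts two properties: the pod must not report Ready before the readiness probe's initial delay has elapsed (the container started at 12:52:27 but became ready only at 12:52:51), and the container must never restart. A sketch of the kind of pod spec being exercised, built with v1.17-era k8s.io/api types (image, port, and delay values are illustrative; newer k8s.io/api releases rename the embedded Handler type to ProbeHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways, // any restart fails the test
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "some-webserver-image", // illustrative
				ReadinessProbe: &corev1.Probe{
					// Readiness must not be reported before this delay.
					InitialDelaySeconds: 30,
					PeriodSeconds:       5,
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/",
							Port: intstr.FromInt(80),
						},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}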
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:52:52.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:53:01.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-988" for this suite.
STEP: Destroying namespace "nsdeletetest-7647" for this suite.
Jul 19 12:53:01.931: INFO: Namespace nsdeletetest-7647 was already deleted
STEP: Destroying namespace "nsdeletetest-3196" for this suite.

• [SLOW TEST:9.770 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":270,"skipped":4394,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
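The namespace test above depends on cascading deletion: removing a namespace garbage-collects every object inside it, so a recreated namespace with the same name must come back empty. A compact client-go sketch of that check (namespace name is illustrative; recent context-taking client-go signatures assumed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Delete the namespace; the namespace controller tears down everything
	// inside it (services included) before the namespace object itself
	// disappears.
	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// ... wait until the namespace is fully gone, then recreate it ...

	svcs, err := cs.CoreV1().Services("nsdeletetest").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services after recreation: %d (want 0)\n", len(svcs.Items))
}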
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:53:01.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 19 12:53:02.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul 19 12:53:02.372: INFO: stderr: ""
Jul 19 12:53:02.372: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.8\", GitCommit:\"35dc4cdc26cfcb6614059c4c6e836e5f0dc61dee\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:52:59Z\", GoVersion:\"go1.13.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:53:02.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6906" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":271,"skipped":4438,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
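The version check above shells out to kubectl and asserts that both the client and the server halves are printed. The server half of the same information is available programmatically through the discovery client, which queries the apiserver's /version endpoint; for the cluster in this log it would report v1.17.5. A sketch:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerVersion performs a GET against /version on the apiserver.
	v, err := dc.ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("server: %s (%s, %s)\n", v.GitVersion, v.Platform, v.GoVersion)
}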
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:53:02.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gn6n7 in namespace proxy-6632
I0719 12:53:02.702184       6 runners.go:189] Created replication controller with name: proxy-service-gn6n7, namespace: proxy-6632, replica count: 1
I0719 12:53:03.752682       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:04.752948       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:05.753192       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:06.753449       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:07.753670       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:08.753892       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:09.754106       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:10.754428       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:11.754689       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:12.754971       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:13.755178       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:14.755380       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:15.755560       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:16.755761       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0719 12:53:17.756044       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0719 12:53:18.756253       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0719 12:53:19.756487       6 runners.go:189] proxy-service-gn6n7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 19 12:53:19.759: INFO: setup took 17.117744678s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
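Each of the 320 attempts below is a GET issued through the apiserver's proxy subresource, with paths of the form /api/v1/namespaces/<ns>/{pods|services}/<name>:<port>/proxy/. A sketch of one such request with client-go's REST client (resource names from the log; DoRaw taking a context is a recent client-go signature, newer than the v1.17 client in this run):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to:
	//   GET /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy
	// The apiserver forwards the request to port 160 of the pod and relays
	// the response body back; the echo server answers "foo" on that port.
	body, err := cs.CoreV1().RESTClient().Get().
		Namespace("proxy-6632").
		Resource("pods").
		Name("proxy-service-gn6n7-z7v5t:160").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}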
Jul 19 12:53:19.764: INFO: (0) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 4.39704ms)
Jul 19 12:53:19.764: INFO: (0) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.411181ms)
Jul 19 12:53:19.764: INFO: (0) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.616552ms)
Jul 19 12:53:19.764: INFO: (0) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.584079ms)
Jul 19 12:53:19.764: INFO: (0) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.813264ms)
Jul 19 12:53:19.766: INFO: (0) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 6.845403ms)
Jul 19 12:53:19.766: INFO: (0) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 6.84023ms)
Jul 19 12:53:19.766: INFO: (0) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 6.901589ms)
Jul 19 12:53:19.766: INFO: (0) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 6.914191ms)
Jul 19 12:53:19.766: INFO: (0) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 7.365181ms)
Jul 19 12:53:19.767: INFO: (0) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 7.758512ms)
Jul 19 12:53:19.770: INFO: (0) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 11.215031ms)
Jul 19 12:53:19.770: INFO: (0) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 11.169319ms)
Jul 19 12:53:19.771: INFO: (0) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 11.815558ms)
Jul 19 12:53:19.771: INFO: (0) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 12.117157ms)
Jul 19 12:53:19.771: INFO: (0) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 5.630756ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: ... (200; 5.717437ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 5.733691ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 5.706439ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 5.978972ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 6.168284ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 6.102487ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 6.091964ms)
Jul 19 12:53:19.777: INFO: (1) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 6.145833ms)
Jul 19 12:53:19.778: INFO: (1) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 6.4652ms)
Jul 19 12:53:19.778: INFO: (1) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 6.545844ms)
Jul 19 12:53:19.780: INFO: (2) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 2.11735ms)
Jul 19 12:53:19.781: INFO: (2) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.237753ms)
Jul 19 12:53:19.782: INFO: (2) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.844392ms)
Jul 19 12:53:19.782: INFO: (2) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 3.983938ms)
Jul 19 12:53:19.782: INFO: (2) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.333351ms)
Jul 19 12:53:19.782: INFO: (2) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.474258ms)
Jul 19 12:53:19.783: INFO: (2) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 4.531989ms)
Jul 19 12:53:19.783: INFO: (2) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 4.618289ms)
Jul 19 12:53:19.783: INFO: (2) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 4.984918ms)
Jul 19 12:53:19.783: INFO: (2) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: ... (200; 5.134536ms)
Jul 19 12:53:19.783: INFO: (2) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 5.182278ms)
Jul 19 12:53:19.785: INFO: (3) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 1.973542ms)
Jul 19 12:53:19.787: INFO: (3) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.602125ms)
Jul 19 12:53:19.787: INFO: (3) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.57957ms)
Jul 19 12:53:19.788: INFO: (3) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 5.068404ms)
Jul 19 12:53:19.788: INFO: (3) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 5.076131ms)
Jul 19 12:53:19.788: INFO: (3) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 5.140603ms)
Jul 19 12:53:19.788: INFO: (3) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 5.296478ms)
Jul 19 12:53:19.789: INFO: (3) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 5.50181ms)
Jul 19 12:53:19.789: INFO: (3) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 5.534319ms)
Jul 19 12:53:19.789: INFO: (3) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 5.556115ms)
Jul 19 12:53:19.789: INFO: (3) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test<... (200; 5.182934ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 5.216514ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 5.238605ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 5.308651ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 5.481711ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 5.579196ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 5.486822ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 5.55ms)
Jul 19 12:53:19.795: INFO: (4) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 5.84789ms)
Jul 19 12:53:19.796: INFO: (4) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test<... (200; 4.026554ms)
Jul 19 12:53:19.805: INFO: (5) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 4.087367ms)
Jul 19 12:53:19.805: INFO: (5) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 4.198572ms)
Jul 19 12:53:19.805: INFO: (5) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.288061ms)
Jul 19 12:53:19.805: INFO: (5) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.294926ms)
Jul 19 12:53:19.805: INFO: (5) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: ... (200; 4.34988ms)
Jul 19 12:53:19.811: INFO: (6) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 4.334129ms)
Jul 19 12:53:19.811: INFO: (6) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 4.431825ms)
Jul 19 12:53:19.811: INFO: (6) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.38699ms)
Jul 19 12:53:19.811: INFO: (6) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.381968ms)
Jul 19 12:53:19.811: INFO: (6) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.396175ms)
Jul 19 12:53:19.811: INFO: (6) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 4.451393ms)
Jul 19 12:53:19.814: INFO: (7) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 2.687009ms)
Jul 19 12:53:19.814: INFO: (7) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 2.997328ms)
Jul 19 12:53:19.814: INFO: (7) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 3.020988ms)
Jul 19 12:53:19.814: INFO: (7) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 3.041756ms)
Jul 19 12:53:19.814: INFO: (7) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.221031ms)
Jul 19 12:53:19.814: INFO: (7) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.238777ms)
Jul 19 12:53:19.815: INFO: (7) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 3.305744ms)
Jul 19 12:53:19.815: INFO: (7) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.315343ms)
Jul 19 12:53:19.816: INFO: (7) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 4.328079ms)
Jul 19 12:53:19.816: INFO: (7) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.400563ms)
Jul 19 12:53:19.816: INFO: (7) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 4.473547ms)
Jul 19 12:53:19.816: INFO: (7) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 4.391874ms)
Jul 19 12:53:19.816: INFO: (7) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 4.470986ms)
Jul 19 12:53:19.816: INFO: (7) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 4.461519ms)
Jul 19 12:53:19.816: INFO: (7) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.638898ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 2.608293ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.163605ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 3.095436ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.132591ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.197519ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 3.190464ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 3.211109ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.177728ms)
Jul 19 12:53:19.819: INFO: (8) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 3.098367ms)
Jul 19 12:53:19.824: INFO: (9) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 3.338176ms)
Jul 19 12:53:19.824: INFO: (9) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test<... (200; 3.345059ms)
Jul 19 12:53:19.824: INFO: (9) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 3.59105ms)
Jul 19 12:53:19.824: INFO: (9) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.639418ms)
Jul 19 12:53:19.824: INFO: (9) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.687719ms)
Jul 19 12:53:19.824: INFO: (9) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.702113ms)
Jul 19 12:53:19.825: INFO: (9) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.142207ms)
Jul 19 12:53:19.825: INFO: (9) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 4.057341ms)
Jul 19 12:53:19.825: INFO: (9) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 4.400773ms)
Jul 19 12:53:19.825: INFO: (9) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 4.510284ms)
Jul 19 12:53:19.825: INFO: (9) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.518577ms)
Jul 19 12:53:19.825: INFO: (9) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 4.480707ms)
Jul 19 12:53:19.826: INFO: (9) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 5.427238ms)
Jul 19 12:53:19.826: INFO: (9) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 5.45561ms)
Jul 19 12:53:19.828: INFO: (10) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 1.694955ms)
Jul 19 12:53:19.829: INFO: (10) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 2.985367ms)
Jul 19 12:53:19.829: INFO: (10) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.018043ms)
Jul 19 12:53:19.829: INFO: (10) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 3.094702ms)
Jul 19 12:53:19.830: INFO: (10) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 3.956783ms)
Jul 19 12:53:19.830: INFO: (10) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.957108ms)
Jul 19 12:53:19.830: INFO: (10) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 3.950794ms)
Jul 19 12:53:19.830: INFO: (10) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.989883ms)
Jul 19 12:53:19.830: INFO: (10) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.999141ms)
Jul 19 12:53:19.830: INFO: (10) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 4.12524ms)
Jul 19 12:53:19.831: INFO: (10) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.252595ms)
Jul 19 12:53:19.831: INFO: (10) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.352247ms)
Jul 19 12:53:19.831: INFO: (10) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 4.310961ms)
Jul 19 12:53:19.831: INFO: (10) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 4.394475ms)
Jul 19 12:53:19.831: INFO: (10) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 4.434789ms)
Jul 19 12:53:19.831: INFO: (10) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: ... (200; 2.736388ms)
Jul 19 12:53:19.834: INFO: (11) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 2.691839ms)
Jul 19 12:53:19.834: INFO: (11) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 2.913685ms)
Jul 19 12:53:19.834: INFO: (11) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.164867ms)
Jul 19 12:53:19.834: INFO: (11) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.481901ms)
Jul 19 12:53:19.834: INFO: (11) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.541591ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test<... (200; 3.791322ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 4.074994ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 4.210192ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 4.268645ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 4.240368ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.288955ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 4.278104ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.364742ms)
Jul 19 12:53:19.835: INFO: (11) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.357106ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.454562ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.657215ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.851801ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.825781ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 3.986263ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 4.116338ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 4.098257ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.082524ms)
Jul 19 12:53:19.839: INFO: (12) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 4.063223ms)
Jul 19 12:53:19.840: INFO: (12) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 4.100928ms)
Jul 19 12:53:19.840: INFO: (12) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.560537ms)
Jul 19 12:53:19.840: INFO: (12) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 4.60838ms)
Jul 19 12:53:19.846: INFO: (13) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 4.666555ms)
Jul 19 12:53:19.846: INFO: (13) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 4.642116ms)
Jul 19 12:53:19.846: INFO: (13) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.692468ms)
Jul 19 12:53:19.846: INFO: (13) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: ... (200; 4.6653ms)
Jul 19 12:53:19.846: INFO: (13) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 4.833524ms)
Jul 19 12:53:19.846: INFO: (13) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.901542ms)
Jul 19 12:53:19.847: INFO: (13) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 5.639705ms)
Jul 19 12:53:19.850: INFO: (14) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.734684ms)
Jul 19 12:53:19.851: INFO: (14) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 5.780017ms)
Jul 19 12:53:19.852: INFO: (14) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 5.729087ms)
Jul 19 12:53:19.853: INFO: (14) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 5.878401ms)
Jul 19 12:53:19.853: INFO: (14) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 6.270267ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 8.413479ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 8.50561ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 8.458772ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 8.440142ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 8.458588ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 8.469653ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 8.578822ms)
Jul 19 12:53:19.855: INFO: (14) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 8.639419ms)
Jul 19 12:53:19.856: INFO: (14) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 8.765227ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.344292ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.506259ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.774904ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.698828ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.767927ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 3.842524ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 3.827315ms)
Jul 19 12:53:19.859: INFO: (15) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.823361ms)
Jul 19 12:53:19.860: INFO: (15) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 4.559975ms)
Jul 19 12:53:19.860: INFO: (15) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname1/proxy/: foo (200; 4.581218ms)
Jul 19 12:53:19.861: INFO: (15) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname2/proxy/: bar (200; 4.94129ms)
Jul 19 12:53:19.861: INFO: (15) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 4.881935ms)
Jul 19 12:53:19.861: INFO: (15) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname1/proxy/: tls baz (200; 5.016195ms)
Jul 19 12:53:19.861: INFO: (15) /api/v1/namespaces/proxy-6632/services/https:proxy-service-gn6n7:tlsportname2/proxy/: tls qux (200; 5.282256ms)
Jul 19 12:53:19.861: INFO: (15) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 5.30823ms)
Jul 19 12:53:19.863: INFO: (16) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 1.976305ms)
Jul 19 12:53:19.864: INFO: (16) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 2.359125ms)
Jul 19 12:53:19.864: INFO: (16) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 2.40816ms)
Jul 19 12:53:19.864: INFO: (16) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 2.518351ms)
Jul 19 12:53:19.865: INFO: (16) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t/proxy/: test (200; 3.299406ms)
Jul 19 12:53:19.865: INFO: (16) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.357347ms)
Jul 19 12:53:19.865: INFO: (16) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.329317ms)
Jul 19 12:53:19.865: INFO: (16) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 3.491042ms)
Jul 19 12:53:19.865: INFO: (16) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.665333ms)
Jul 19 12:53:19.865: INFO: (16) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 2.142295ms)
Jul 19 12:53:19.868: INFO: (17) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 2.308573ms)
Jul 19 12:53:19.868: INFO: (17) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 2.3491ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.917223ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/services/http:proxy-service-gn6n7:portname1/proxy/: foo (200; 4.104ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/services/proxy-service-gn6n7:portname2/proxy/: bar (200; 4.152832ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.335557ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 4.413947ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.343397ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 4.400827ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 4.3858ms)
Jul 19 12:53:19.870: INFO: (17) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 3.425553ms)
Jul 19 12:53:19.874: INFO: (18) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 3.422422ms)
Jul 19 12:53:19.874: INFO: (18) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 3.503248ms)
Jul 19 12:53:19.874: INFO: (18) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.528574ms)
Jul 19 12:53:19.874: INFO: (18) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 3.526366ms)
Jul 19 12:53:19.874: INFO: (18) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 3.534859ms)
Jul 19 12:53:19.874: INFO: (18) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 3.56107ms)
Jul 19 12:53:19.874: INFO: (18) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: test (200; 1.908219ms)
Jul 19 12:53:19.877: INFO: (19) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:1080/proxy/: test<... (200; 1.93554ms)
Jul 19 12:53:19.877: INFO: (19) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 1.976168ms)
Jul 19 12:53:19.878: INFO: (19) /api/v1/namespaces/proxy-6632/pods/http:proxy-service-gn6n7-z7v5t:1080/proxy/: ... (200; 2.648282ms)
Jul 19 12:53:19.878: INFO: (19) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:160/proxy/: foo (200; 2.784849ms)
Jul 19 12:53:19.878: INFO: (19) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:462/proxy/: tls qux (200; 2.984431ms)
Jul 19 12:53:19.878: INFO: (19) /api/v1/namespaces/proxy-6632/pods/proxy-service-gn6n7-z7v5t:162/proxy/: bar (200; 2.95065ms)
Jul 19 12:53:19.878: INFO: (19) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:460/proxy/: tls baz (200; 2.959896ms)
Jul 19 12:53:19.878: INFO: (19) /api/v1/namespaces/proxy-6632/pods/https:proxy-service-gn6n7-z7v5t:443/proxy/: ...
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Destroying namespace "proxy-6632" for this suite.
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":272,"skipped":4441,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-ff65b7ae-79e8-4bbd-8ac4-f60506599c28
STEP: Creating secret with name secret-projected-all-test-volume-4e97eea0-5546-4309-a432-88b13bce1f1e
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 19 12:53:28.744: INFO: Waiting up to 5m0s for pod "projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03" in namespace "projected-9700" to be "success or failure"
Jul 19 12:53:28.759: INFO: Pod "projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03": Phase="Pending", Reason="", readiness=false. Elapsed: 14.892402ms
Jul 19 12:53:30.928: INFO: Pod "projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183887773s
Jul 19 12:53:33.341: INFO: Pod "projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.597078223s
Jul 19 12:53:35.516: INFO: Pod "projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.772436886s
STEP: Saw pod success
Jul 19 12:53:35.516: INFO: Pod "projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03" satisfied condition "success or failure"
Jul 19 12:53:35.534: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03 container projected-all-volume-test: 
STEP: delete the pod
Jul 19 12:53:35.886: INFO: Waiting for pod projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03 to disappear
Jul 19 12:53:35.905: INFO: Pod projected-volume-ea65cfe2-8cf8-4940-a9da-6067b1411f03 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:53:35.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9700" for this suite.

• [SLOW TEST:8.044 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4482,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
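The projected-volume test above mounts a single volume that merges three sources: a ConfigMap, a Secret, and downward-API fields. A sketch of that volume stanza using k8s.io/api types (object names shortened from the generated ones in the log; the downward-API item is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// All three sources land under one mount point, and keys
				// from each appear side by side in the same directory.
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(vol)
	fmt.Println(string(out))
}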
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:53:35.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 19 12:53:36.162: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432454 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 19 12:53:36.163: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432454 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 19 12:53:46.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432490 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 19 12:53:46.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432490 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 19 12:53:56.235: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432520 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 19 12:53:56.235: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432520 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 19 12:54:06.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432550 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 19 12:54:06.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-a 82d24378-68cc-407f-8ad0-d1782d6fa2bf 2432550 0 2020-07-19 12:53:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 19 12:54:16.248: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-b fb472fb6-447b-477f-910b-8bc92f1321cd 2432577 0 2020-07-19 12:54:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 19 12:54:16.248: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-b fb472fb6-447b-477f-910b-8bc92f1321cd 2432577 0 2020-07-19 12:54:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 19 12:54:26.315: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-b fb472fb6-447b-477f-910b-8bc92f1321cd 2432606 0 2020-07-19 12:54:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 19 12:54:26.315: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9983 /api/v1/namespaces/watch-9983/configmaps/e2e-watch-test-configmap-b fb472fb6-447b-477f-910b-8bc92f1321cd 2432606 0 2020-07-19 12:54:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:54:36.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9983" for this suite.

• [SLOW TEST:60.412 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":274,"skipped":4542,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
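[Editor's sketch] The duplicated "Got : ..." lines above come from multiple label-selected watchers observing the same ConfigMap. The watch pattern this spec exercises can be reproduced with a minimal client-go program along the following lines; the namespace "watch-9983" and the label are taken from the log, but the code itself is an illustration (assuming the client-go v0.18+ Watch signature), not the e2e framework's implementation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Each watcher filters by label, mirroring the A/B watchers in the spec.
	w, err := clientset.CoreV1().ConfigMaps("watch-9983").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// ADDED / MODIFIED / DELETED events arrive on the result channel,
	// matching the "Got : ..." lines in the log above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}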
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:54:36.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-2bss
STEP: Creating a pod to test atomic-volume-subpath
Jul 19 12:54:37.342: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2bss" in namespace "subpath-4449" to be "success or failure"
Jul 19 12:54:37.761: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Pending", Reason="", readiness=false. Elapsed: 419.233121ms
Jul 19 12:54:39.829: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.48728327s
Jul 19 12:54:42.330: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Pending", Reason="", readiness=false. Elapsed: 4.988723228s
Jul 19 12:54:45.043: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Pending", Reason="", readiness=false. Elapsed: 7.70086269s
Jul 19 12:54:47.046: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Pending", Reason="", readiness=false. Elapsed: 9.704267976s
Jul 19 12:54:49.050: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 11.707934031s
Jul 19 12:54:51.081: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 13.739127018s
Jul 19 12:54:53.236: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 15.893752904s
Jul 19 12:54:55.239: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 17.896943968s
Jul 19 12:54:57.242: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 19.90021394s
Jul 19 12:54:59.246: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 21.904420565s
Jul 19 12:55:01.250: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 23.908472499s
Jul 19 12:55:03.254: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 25.912025654s
Jul 19 12:55:05.295: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 27.952782524s
Jul 19 12:55:07.298: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Running", Reason="", readiness=true. Elapsed: 29.956084537s
Jul 19 12:55:09.302: INFO: Pod "pod-subpath-test-configmap-2bss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.959933993s
STEP: Saw pod success
Jul 19 12:55:09.302: INFO: Pod "pod-subpath-test-configmap-2bss" satisfied condition "success or failure"
Jul 19 12:55:09.304: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-2bss container test-container-subpath-configmap-2bss: 
STEP: delete the pod
Jul 19 12:55:09.697: INFO: Waiting for pod pod-subpath-test-configmap-2bss to disappear
Jul 19 12:55:09.790: INFO: Pod pod-subpath-test-configmap-2bss no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2bss
Jul 19 12:55:09.790: INFO: Deleting pod "pod-subpath-test-configmap-2bss" in namespace "subpath-4449"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:55:09.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4449" for this suite.

• [SLOW TEST:33.783 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":275,"skipped":4547,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
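[Editor's sketch] The "atomic-volume-subpath" pod above mounts a single key of a ConfigMap volume via subPath rather than the whole directory. A hand-written approximation of such a pod is below; the pod, volume, and ConfigMap names are invented for illustration and are not the generated names ("pod-subpath-test-configmap-2bss") used by the test.

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "config",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "demo-config"},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"cat", "/mnt/key"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "config",
					MountPath: "/mnt/key",
					// SubPath mounts a single key from the atomically updated
					// ConfigMap volume instead of the whole directory.
					SubPath: "key",
				}},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}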
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:55:10.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jul 19 12:55:10.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:55:26.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7778" for this suite.

• [SLOW TEST:16.087 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":276,"skipped":4551,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
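[Editor's sketch] The "check the new version name is served / old version name is removed" steps above verify the apiserver's published OpenAPI document. One way to approximate that check by hand is to fetch /openapi/v2 and look for the versioned definition names; the definition names below ("com.example.v2.CronTab") are hypothetical, since the e2e test generates random CRD names.

package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The aggregated OpenAPI v2 document is served at /openapi/v2.
	raw, err := clientset.Discovery().RESTClient().Get().
		AbsPath("/openapi/v2").
		Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}

	spec := string(raw)
	// Hypothetical definition names; substitute the CRD's real group/kind.
	fmt.Println("new version served: ", strings.Contains(spec, "com.example.v2.CronTab"))
	fmt.Println("old version removed:", !strings.Contains(spec, "com.example.v1.CronTab"))
}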
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 19 12:55:26.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 19 12:55:32.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4590" for this suite.

• [SLOW TEST:6.163 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4560,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
Jul 19 12:55:32.359: INFO: Running AfterSuite actions on all nodes
Jul 19 12:55:32.359: INFO: Running AfterSuite actions on node 1
Jul 19 12:55:32.359: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1410

Ran 278 of 4843 Specs in 5421.566 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (5421.84s)
FAIL