I0402 21:07:18.490479 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0402 21:07:18.490695 6 e2e.go:109] Starting e2e run "cb535d20-9b2e-4664-870d-63279a155206" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585861637 - Will randomize all specs
Will run 278 of 4843 specs

Apr 2 21:07:18.550: INFO: >>> kubeConfig: /root/.kube/config
Apr 2 21:07:18.552: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 2 21:07:18.572: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 2 21:07:18.601: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 2 21:07:18.601: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 2 21:07:18.601: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 2 21:07:18.612: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 2 21:07:18.612: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 2 21:07:18.612: INFO: e2e test version: v1.17.3
Apr 2 21:07:18.613: INFO: kube-apiserver version: v1.17.2
Apr 2 21:07:18.613: INFO: >>> kubeConfig: /root/.kube/config
Apr 2 21:07:18.618: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:07:18.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
Apr 2 21:07:18.719: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 2 21:07:18.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 2 21:07:21.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1101 create -f -'
Apr 2 21:07:24.598: INFO: stderr: ""
Apr 2 21:07:24.598: INFO: stdout: "e2e-test-crd-publish-openapi-5970-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 2 21:07:24.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1101 delete e2e-test-crd-publish-openapi-5970-crds test-cr'
Apr 2 21:07:24.687: INFO: stderr: ""
Apr 2 21:07:24.687: INFO: stdout: "e2e-test-crd-publish-openapi-5970-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 2 21:07:24.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1101 apply -f -'
Apr 2 21:07:24.913: INFO: stderr: ""
Apr 2 21:07:24.913: INFO: stdout: "e2e-test-crd-publish-openapi-5970-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 2 21:07:24.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1101 delete e2e-test-crd-publish-openapi-5970-crds test-cr'
Apr 2 21:07:25.034: INFO: stderr: ""
Apr 2 21:07:25.034: INFO: stdout: "e2e-test-crd-publish-openapi-5970-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 2 21:07:25.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5970-crds'
Apr 2 21:07:25.273: INFO: stderr: ""
Apr 2 21:07:25.273: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5970-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:07:28.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1101" for this suite.
• [SLOW TEST:9.561 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":1,"skipped":20,"failed":0}
SSSS
------------------------------
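Editor's note: for readers reproducing the CRD shape this spec publishes, here is a minimal sketch piped through kubectl, the same client the test drives above. The x-kubernetes-preserve-unknown-fields marker and the "Specification of Waldo"/"Status of Waldo" descriptions mirror the kubectl explain output in the log; the waldos.example.com name and everything else are illustrative assumptions, not the test's generated fixtures.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com          # illustrative; the e2e run generates a random name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true   # unknown properties in this embedded object are kept
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
EOF
kubectl explain waldos       # the apiserver publishes the schema, so explain works, as asserted above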
PTR)" && test -n "$$check" && echo OK > /results/10.99.243.197_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3185.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3185.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3185.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3185.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3185.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3185.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3185.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3185.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3185.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3185.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3185.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.243.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.243.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.243.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.243.197_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:07:40.398: INFO: Unable to read wheezy_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.400: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.402: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.415: INFO: Unable to read jessie_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.418: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.420: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:40.431: INFO: Lookups using dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4 failed for: [wheezy_udp@dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_udp@dns-test-service.dns-3185.svc.cluster.local jessie_tcp@dns-test-service.dns-3185.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local] Apr 2 21:07:45.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods 
dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.444: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.463: INFO: Unable to read jessie_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.466: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.468: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:45.480: INFO: Lookups using dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4 failed for: [wheezy_udp@dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_udp@dns-test-service.dns-3185.svc.cluster.local jessie_tcp@dns-test-service.dns-3185.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local] Apr 2 21:07:50.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.443: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.458: INFO: Unable to read jessie_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could 
not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.460: INFO: Unable to read jessie_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.482: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.485: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:50.547: INFO: Lookups using dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4 failed for: [wheezy_udp@dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_udp@dns-test-service.dns-3185.svc.cluster.local jessie_tcp@dns-test-service.dns-3185.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local] Apr 2 21:07:55.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.443: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.457: INFO: Unable to read jessie_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.460: INFO: Unable to read jessie_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.462: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.464: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod 
dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:07:55.478: INFO: Lookups using dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4 failed for: [wheezy_udp@dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_udp@dns-test-service.dns-3185.svc.cluster.local jessie_tcp@dns-test-service.dns-3185.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local] Apr 2 21:08:00.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.437: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.439: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.441: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.458: INFO: Unable to read jessie_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.460: INFO: Unable to read jessie_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.462: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.465: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:00.500: INFO: Lookups using dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4 failed for: [wheezy_udp@dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_udp@dns-test-service.dns-3185.svc.cluster.local jessie_tcp@dns-test-service.dns-3185.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local] Apr 2 21:08:05.436: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.442: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.467: INFO: Unable to read jessie_udp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.469: INFO: Unable to read jessie_tcp@dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.471: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.473: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local from pod dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4: the server could not find the requested resource (get pods dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4) Apr 2 21:08:05.482: INFO: Lookups using dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4 failed for: [wheezy_udp@dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@dns-test-service.dns-3185.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_udp@dns-test-service.dns-3185.svc.cluster.local jessie_tcp@dns-test-service.dns-3185.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3185.svc.cluster.local] Apr 2 21:08:10.509: INFO: DNS probes using dns-3185/dns-test-c910b6c0-3de1-4606-9842-d9c73dfafec4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:08:11.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3185" for this suite. 
• [SLOW TEST:44.020 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":2,"skipped":24,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:08:12.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b083d75d-c8f0-4236-ad28-c3372c8c64b5 STEP: Creating a pod to test consume secrets Apr 2 21:08:12.865: INFO: Waiting up to 5m0s for pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd" in namespace "secrets-3492" to be "success or failure" Apr 2 21:08:13.063: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 198.556251ms Apr 2 21:08:15.067: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20214017s Apr 2 21:08:17.123: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Running", Reason="", readiness=true. Elapsed: 4.258368867s Apr 2 21:08:19.127: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.262135036s STEP: Saw pod success Apr 2 21:08:19.127: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd" satisfied condition "success or failure" Apr 2 21:08:19.130: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd container secret-volume-test: STEP: delete the pod Apr 2 21:08:19.163: INFO: Waiting for pod pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd to disappear Apr 2 21:08:19.183: INFO: Pod pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:08:19.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3492" for this suite. 
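Editor's note: the probe pod above runs dig in a loop and writes OK marker files under /results; the "Unable to read" retries are the framework polling those files before DNS has programmed, so they are expected noise until the 21:08:10.509 "succeeded" line. To rerun one of the checks by hand from any pod that has dig installed, the commands below are lifted straight from the logged script (the doubled $$ is a templating escape in the log; interactively you type a single $):

dig +notcp +noall +answer +search dns-test-service.dns-3185.svc.cluster.local A                # A record over UDP
dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3185.svc.cluster.local SRV    # SRV record over TCP
dig +notcp +noall +answer +search 197.243.99.10.in-addr.arpa. PTR                             # reverse lookup of ClusterIP 10.99.243.197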
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:08:12.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-b083d75d-c8f0-4236-ad28-c3372c8c64b5
STEP: Creating a pod to test consume secrets
Apr 2 21:08:12.865: INFO: Waiting up to 5m0s for pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd" in namespace "secrets-3492" to be "success or failure"
Apr 2 21:08:13.063: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 198.556251ms
Apr 2 21:08:15.067: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20214017s
Apr 2 21:08:17.123: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Running", Reason="", readiness=true. Elapsed: 4.258368867s
Apr 2 21:08:19.127: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.262135036s
STEP: Saw pod success
Apr 2 21:08:19.127: INFO: Pod "pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd" satisfied condition "success or failure"
Apr 2 21:08:19.130: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd container secret-volume-test:
STEP: delete the pod
Apr 2 21:08:19.163: INFO: Waiting for pod pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd to disappear
Apr 2 21:08:19.183: INFO: Pod pod-secrets-ae9abb82-e369-4c1f-a52a-d97a660ac1fd no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:08:19.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3492" for this suite.
• [SLOW TEST:6.989 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":33,"failed":0}
S
------------------------------
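Editor's note: the pattern this spec exercises is one secret projected into two separate volume mounts in the same pod. A minimal sketch follows; the names and the busybox image are illustrative stand-ins, not the test's generated fixtures. As in the log above, success means the pod reaches Succeeded and its logs show the expected content (here, the same value twice).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-multivol-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-multivol-demo   # same secret ...
  - name: secret-volume-2
    secret:
      secretName: secret-multivol-demo   # ... mounted a second time
EOF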
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:08:19.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 2 21:08:19.242: INFO: Waiting up to 5m0s for pod "pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d" in namespace "emptydir-9460" to be "success or failure"
Apr 2 21:08:19.263: INFO: Pod "pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.089382ms
Apr 2 21:08:21.267: INFO: Pod "pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024922238s
Apr 2 21:08:23.270: INFO: Pod "pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02802497s
Apr 2 21:08:25.274: INFO: Pod "pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03182335s
STEP: Saw pod success
Apr 2 21:08:25.274: INFO: Pod "pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d" satisfied condition "success or failure"
Apr 2 21:08:25.276: INFO: Trying to get logs from node jerma-worker pod pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d container test-container:
STEP: delete the pod
Apr 2 21:08:25.298: INFO: Waiting for pod pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d to disappear
Apr 2 21:08:25.316: INFO: Pod pod-64b6e7a2-0a55-4310-8173-ea39763a7d4d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:08:25.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9460" for this suite.
• [SLOW TEST:6.133 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
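Editor's note: the (root,0666,tmpfs) triple in the spec name means run as root, create a file with 0666 permissions, on a memory-backed emptyDir. A hand-rolled equivalent with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory     # tmpfs-backed, which is what the (...,...,tmpfs) variants assert
EOF
kubectl logs pod-emptydir-0666-demo    # expect "666" and a tmpfs mount line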
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:08:25.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-33d79bfa-5ef6-413d-a86d-247e3a123444
STEP: Creating a pod to test consume secrets
Apr 2 21:08:25.399: INFO: Waiting up to 5m0s for pod "pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984" in namespace "secrets-7477" to be "success or failure"
Apr 2 21:08:25.403: INFO: Pod "pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265376ms
Apr 2 21:08:27.745: INFO: Pod "pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346360614s
Apr 2 21:08:29.749: INFO: Pod "pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984": Phase="Running", Reason="", readiness=true. Elapsed: 4.3499465s
Apr 2 21:08:31.756: INFO: Pod "pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.356929706s
STEP: Saw pod success
Apr 2 21:08:31.756: INFO: Pod "pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984" satisfied condition "success or failure"
Apr 2 21:08:31.763: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984 container secret-env-test:
STEP: delete the pod
Apr 2 21:08:31.843: INFO: Waiting for pod pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984 to disappear
Apr 2 21:08:31.859: INFO: Pod pod-secrets-d01bc65b-3012-47c5-901f-bacc84008984 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:08:31.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7477" for this suite.
• [SLOW TEST:6.541 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":59,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
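Editor's note: for the env-var consumption path, the shape under test is a secretKeyRef in the container's env. A minimal sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.31
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs pod-secrets-env-demo      # expect SECRET_DATA=value-1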
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:08:31.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 2 21:08:31.980: INFO: Waiting up to 5m0s for pod "pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83" in namespace "emptydir-6334" to be "success or failure"
Apr 2 21:08:31.985: INFO: Pod "pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171134ms
Apr 2 21:08:34.022: INFO: Pod "pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041732509s
Apr 2 21:08:36.026: INFO: Pod "pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83": Phase="Running", Reason="", readiness=true. Elapsed: 4.045204949s
Apr 2 21:08:38.029: INFO: Pod "pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048890812s
STEP: Saw pod success
Apr 2 21:08:38.029: INFO: Pod "pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83" satisfied condition "success or failure"
Apr 2 21:08:38.032: INFO: Trying to get logs from node jerma-worker2 pod pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83 container test-container:
STEP: delete the pod
Apr 2 21:08:38.082: INFO: Waiting for pod pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83 to disappear
Apr 2 21:08:38.084: INFO: Pod pod-8b022fd1-de3a-4b0b-832b-af18f4e53c83 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:08:38.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6334" for this suite.
• [SLOW TEST:6.227 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":83,"failed":0}
SSSSSSSSSS
------------------------------
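Editor's note: the (non-root,0644,tmpfs) variant differs from the earlier emptyDir sketch only in the pod-level securityContext and the file mode; emptyDir directories are world-writable by default, so an arbitrary non-root UID can still write. Roughly, and again with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001    # the "non-root" part of (non-root,0644,tmpfs)
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "id -u && echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs pod-emptydir-0644-demo    # expect "1001" and "644"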
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:08:38.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-c177c39f-abd4-476e-9a84-e5b37c967bdc
STEP: Creating a pod to test consume configMaps
Apr 2 21:08:38.147: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71" in namespace "configmap-214" to be "success or failure"
Apr 2 21:08:38.181: INFO: Pod "pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71": Phase="Pending", Reason="", readiness=false. Elapsed: 34.364817ms
Apr 2 21:08:40.478: INFO: Pod "pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331332426s
Apr 2 21:08:42.487: INFO: Pod "pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339768339s
Apr 2 21:08:44.491: INFO: Pod "pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.343751918s
STEP: Saw pod success
Apr 2 21:08:44.491: INFO: Pod "pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71" satisfied condition "success or failure"
Apr 2 21:08:44.494: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71 container configmap-volume-test:
STEP: delete the pod
Apr 2 21:08:44.537: INFO: Waiting for pod pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71 to disappear
Apr 2 21:08:44.621: INFO: Pod pod-configmaps-ee7504e2-1951-4544-9d9e-0a6a361d1c71 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:08:44.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-214" for this suite.
• [SLOW TEST:6.538 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":93,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
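Editor's note: "mappings and Item mode" refers to the items list of a configMap volume: a key is remapped to a different path and given an explicit file mode. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-map-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-map-demo
      items:
      - key: data-1
        path: path/to/data-2   # the "mapping"
        mode: 0400             # the "Item mode set"
EOF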
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:08:44.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 2 21:08:45.632: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 2 21:08:48.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458526, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458526, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458526, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458525, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 2 21:08:50.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458526, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458526, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458526, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458525, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 2 21:08:53.070: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:09:03.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-170" for this suite.
STEP: Destroying namespace "webhook-170-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.669 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":8,"skipped":136,"failed":0}
SSSS
------------------------------
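Editor's note: the "Registering the webhook via the AdmissionRegistration API" step above amounts to creating a ValidatingWebhookConfiguration pointing at the sample-webhook-deployment's service. A skeletal version follows; the caBundle, the service coordinates, and the deny path are placeholders, since the test generates its own serving certs and runs its own admission endpoint. The whitelisted-namespace step at the end corresponds to a namespaceSelector that exempts labeled namespaces.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-pods-and-configmaps-demo
webhooks:
- name: deny-demo.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: default        # placeholder; the test uses its own namespace
      name: e2e-test-webhook
      path: /always-deny        # placeholder endpoint on the webhook server
      port: 443
    caBundle: LS0t...           # placeholder; must be the CA that signed the serving cert
EOF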
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":9,"skipped":140,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:09:03.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-21f96bde-1a65-4c62-8ab0-fc789f539c2e in namespace container-probe-1858 Apr 2 21:09:07.474: INFO: Started pod test-webserver-21f96bde-1a65-4c62-8ab0-fc789f539c2e in namespace container-probe-1858 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 21:09:07.476: INFO: Initial restart count of pod test-webserver-21f96bde-1a65-4c62-8ab0-fc789f539c2e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:13:08.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1858" for this suite. 
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:09:03.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-21f96bde-1a65-4c62-8ab0-fc789f539c2e in namespace container-probe-1858
Apr 2 21:09:07.474: INFO: Started pod test-webserver-21f96bde-1a65-4c62-8ab0-fc789f539c2e in namespace container-probe-1858
STEP: checking the pod's current state and verifying that restartCount is present
Apr 2 21:09:07.476: INFO: Initial restart count of pod test-webserver-21f96bde-1a65-4c62-8ab0-fc789f539c2e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:13:08.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1858" for this suite.
• [SLOW TEST:244.813 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":142,"failed":0}
SSSSSSSSSSSS
------------------------------
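Editor's note: the assertion here is simply that a healthy HTTP liveness probe never bumps restartCount over the roughly four-minute observation window. A stand-in pod follows; the image and probe target are illustrative, not the test's own test-webserver fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo
spec:
  containers:
  - name: test-webserver
    image: nginx:1.17          # any server that keeps answering the probed path
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                # stand-in for the test's health endpoint
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
EOF
# the spec's observation boils down to this staying at 0:
kubectl get pod test-webserver-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'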
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 2 21:13:08.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 2 21:13:08.450: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 2 21:13:13.469: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 2 21:13:13.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8737" for this suite.
• [SLOW TEST:5.335 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":11,"skipped":154,"failed":0}
SSSS
------------------------------
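Editor's note: "released" means a pod's labels stop matching the RC's selector, so the controller orphans it (the pod keeps running) and creates a replacement. A reproduction sketch, reusing the test's pod-release name for readability; the image is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: nginx:1.17
EOF
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite   # label no longer matches the selector
kubectl get pods --show-labels                       # released pod still runs; the RC has spun up a replacement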
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":158,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:13:17.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912 Apr 2 21:13:17.899: INFO: Pod name my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912: Found 0 pods out of 1 Apr 2 21:13:22.903: INFO: Pod name my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912: Found 1 pods out of 1 Apr 2 21:13:22.903: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912" are running Apr 2 21:13:22.906: INFO: Pod "my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912-nmgmj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 21:13:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 21:13:21 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 21:13:21 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 21:13:17 +0000 UTC Reason: Message:}]) Apr 2 21:13:22.906: INFO: Trying to dial the pod Apr 2 21:13:27.918: INFO: Controller my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912: Got expected result from replica 1 [my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912-nmgmj]: "my-hostname-basic-0c1b6f6c-016f-4946-aecc-ecc92d999912-nmgmj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:13:27.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9482" for this suite. 
• [SLOW TEST:10.091 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":13,"skipped":167,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:13:27.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 2 21:13:32.035: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:13:32.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9094" for this suite. 
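------------------------------
Note: the "Expected: &{DONE}" assertion above comes from TerminationMessagePolicy FallbackToLogsOnError: when a container fails and its termination-log file is empty, the kubelet copies the tail of the container log into the terminated state's Message. A minimal sketch of that container shape (names and image are assumptions; pre-1.18 client-go signatures; the poll loop a real program would need is elided):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-msg-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Writes nothing to /dev/termination-log, logs "DONE", then fails.
				Command: []string{"sh", "-c", "echo DONE; exit 1"},
				// On failure with an empty termination-log file, the kubelet
				// falls back to the tail of the container log as the message.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	// After the container reaches Failed (poll for it in real code), the
	// message is readable from status:
	p, err := cs.CoreV1().Pods("default").Get("termination-msg-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if len(p.Status.ContainerStatuses) > 0 {
		if t := p.Status.ContainerStatuses[0].State.Terminated; t != nil {
			fmt.Println(t.Message) // expected to contain "DONE"
		}
	}
}
------------------------------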
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:13:32.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-9933225e-5ea3-4e57-9442-2eda45573cbb STEP: Creating a pod to test consume configMaps Apr 2 21:13:32.223: INFO: Waiting up to 5m0s for pod "pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac" in namespace "configmap-9519" to be "success or failure" Apr 2 21:13:32.254: INFO: Pod "pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac": Phase="Pending", Reason="", readiness=false. Elapsed: 31.689847ms Apr 2 21:13:34.258: INFO: Pod "pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035386793s Apr 2 21:13:36.262: INFO: Pod "pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039459036s STEP: Saw pod success Apr 2 21:13:36.262: INFO: Pod "pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac" satisfied condition "success or failure" Apr 2 21:13:36.265: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac container configmap-volume-test: STEP: delete the pod Apr 2 21:13:36.327: INFO: Waiting for pod pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac to disappear Apr 2 21:13:36.337: INFO: Pod pod-configmaps-1655b9d3-d994-4693-88db-46900b6545ac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:13:36.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9519" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":205,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:13:36.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 2 21:13:36.411: INFO: Waiting up to 5m0s for pod "pod-df509849-72cd-461f-9b18-487ae857fa2a" in namespace "emptydir-2838" to be "success or failure" Apr 2 21:13:36.421: INFO: Pod "pod-df509849-72cd-461f-9b18-487ae857fa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.276621ms Apr 2 21:13:38.431: INFO: Pod "pod-df509849-72cd-461f-9b18-487ae857fa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019924681s Apr 2 21:13:40.435: INFO: Pod "pod-df509849-72cd-461f-9b18-487ae857fa2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023655582s STEP: Saw pod success Apr 2 21:13:40.435: INFO: Pod "pod-df509849-72cd-461f-9b18-487ae857fa2a" satisfied condition "success or failure" Apr 2 21:13:40.438: INFO: Trying to get logs from node jerma-worker2 pod pod-df509849-72cd-461f-9b18-487ae857fa2a container test-container: STEP: delete the pod Apr 2 21:13:40.591: INFO: Waiting for pod pod-df509849-72cd-461f-9b18-487ae857fa2a to disappear Apr 2 21:13:40.607: INFO: Pod pod-df509849-72cd-461f-9b18-487ae857fa2a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:13:40.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2838" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":212,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:13:40.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:14:11.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3653" for this suite. 
• [SLOW TEST:30.827 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":218,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:14:11.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD Apr 2 21:14:11.476: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:14:24.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3778" for this suite.
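------------------------------
Note: marking a CRD version Served: false removes that version's definitions from the aggregated OpenAPI document while the still-served version is unaffected, which is what the steps above verify. A sketch of the Versions stanza with the apiextensions v1 types; the group, kind, and version names are placeholders (the suite generates random ones), and the client call uses the pre-1.18 signature:

package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Minimal structural schema, required for each version in apiextensions/v1.
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "testcrds", Singular: "testcrd", Kind: "TestCrd", ListKind: "TestCrdList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				// Updating this entry to Served: false later drops v1 from the
				// published OpenAPI spec; v2 above is left untouched.
				{Name: "v1", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(crd); err != nil {
		panic(err)
	}
}
------------------------------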
• [SLOW TEST:13.235 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":18,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:14:24.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:14:28.824: INFO: Waiting up to 5m0s for pod "client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33" in namespace "pods-9261" to be "success or failure" Apr 2 21:14:28.827: INFO: Pod "client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.717286ms Apr 2 21:14:30.831: INFO: Pod "client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00701592s Apr 2 21:14:32.835: INFO: Pod "client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011061726s STEP: Saw pod success Apr 2 21:14:32.835: INFO: Pod "client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33" satisfied condition "success or failure" Apr 2 21:14:32.838: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33 container env3cont: STEP: delete the pod Apr 2 21:14:32.880: INFO: Waiting for pod client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33 to disappear Apr 2 21:14:32.884: INFO: Pod client-envvars-a189e277-06b0-4922-b8cc-01d10c3fbf33 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:14:32.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9261" for this suite. 
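------------------------------
Note: the env vars this test looks for follow the kubelet's service injection convention: for a service existing when the pod starts, the container receives {NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT, with the service name upper-cased and dashes turned into underscores. That ordering is why the suite creates a server pod and service first, then the client pod. A simplified sketch (service name, port, and image are assumptions; pre-1.18 client-go signatures):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "server"},
			Ports:    []corev1.ServicePort{{Port: 8765, TargetPort: intstr.FromInt(8080)}},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(svc); err != nil {
		panic(err)
	}
	// A pod created *after* the service sees FOOSERVICE_SERVICE_HOST and
	// FOOSERVICE_SERVICE_PORT in its environment.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-envvars-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep FOOSERVICE_"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}
------------------------------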
• [SLOW TEST:8.213 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":249,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:14:32.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:14:33.053: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"39651103-7cfe-4281-ad34-f2f553caa073", Controller:(*bool)(0xc001a6317a), BlockOwnerDeletion:(*bool)(0xc001a6317b)}} Apr 2 21:14:33.142: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a9c73ea4-ec36-4f5b-a7c9-68dfac44faa2", Controller:(*bool)(0xc003b74d4a), BlockOwnerDeletion:(*bool)(0xc003b74d4b)}} Apr 2 21:14:33.173: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"666f92c1-89fa-4611-9887-7ec38ddb773f", Controller:(*bool)(0xc003b74efa), BlockOwnerDeletion:(*bool)(0xc003b74efb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:14:38.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7869" for this suite. 
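------------------------------
Note: the three OwnerReferences dumps above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the spec asserts the garbage collector still makes progress rather than deadlocking on BlockOwnerDeletion. A sketch of wiring such a circle, mirroring the Controller and BlockOwnerDeletion pointers in the dump; it assumes pods named pod1, pod2, pod3 already exist (pre-1.18 client-go signatures):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default"
	getPod := func(name string) *corev1.Pod {
		p, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		return p
	}
	pod1, pod2, pod3 := getPod("pod1"), getPod("pod2"), getPod("pod3")

	isController, block := true, true
	// pod1 <- pod3, pod2 <- pod1, pod3 <- pod2: a reference circle, as in the dump.
	for _, link := range []struct{ owned, owner *corev1.Pod }{
		{pod1, pod3}, {pod2, pod1}, {pod3, pod2},
	} {
		link.owned.OwnerReferences = append(link.owned.OwnerReferences, metav1.OwnerReference{
			APIVersion:         "v1",
			Kind:               "Pod",
			Name:               link.owner.Name,
			UID:                link.owner.UID,
			Controller:         &isController,
			BlockOwnerDeletion: &block,
		})
		if _, err := cs.CoreV1().Pods(ns).Update(link.owned); err != nil {
			panic(err)
		}
	}
}
------------------------------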
• [SLOW TEST:5.347 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":20,"skipped":260,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:14:38.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-818929cd-33d1-4184-9823-c24f78d06f06 STEP: Creating a pod to test consume secrets Apr 2 21:14:38.329: INFO: Waiting up to 5m0s for pod "pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1" in namespace "secrets-1631" to be "success or failure" Apr 2 21:14:38.352: INFO: Pod "pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.215171ms Apr 2 21:14:40.356: INFO: Pod "pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02755873s Apr 2 21:14:42.360: INFO: Pod "pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.03148647s Apr 2 21:14:44.364: INFO: Pod "pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035848187s STEP: Saw pod success Apr 2 21:14:44.365: INFO: Pod "pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1" satisfied condition "success or failure" Apr 2 21:14:44.368: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1 container secret-volume-test: STEP: delete the pod Apr 2 21:14:44.388: INFO: Waiting for pod pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1 to disappear Apr 2 21:14:44.392: INFO: Pod pod-secrets-e522e368-9d6a-4dea-a0c8-70eb1fd3d5e1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:14:44.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1631" for this suite. 
• [SLOW TEST:6.160 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":265,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:14:44.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 2 21:14:44.522: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9551 /api/v1/namespaces/watch-9551/configmaps/e2e-watch-test-resource-version 45a04627-2191-4a60-b1a4-d3846c64d8b2 4845241 0 2020-04-02 21:14:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 2 21:14:44.522: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9551 /api/v1/namespaces/watch-9551/configmaps/e2e-watch-test-resource-version 45a04627-2191-4a60-b1a4-d3846c64d8b2 4845242 0 2020-04-02 21:14:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:14:44.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9551" for this suite. 
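------------------------------
Note: the watch above is opened at the resourceVersion returned by the *first* update, so only later changes are delivered: the second MODIFIED (mutation: 2) and the DELETED event, exactly the two events logged. A sketch of opening such a watch (the namespace and the literal resourceVersion are illustrative; pre-1.18 client-go Watch signature):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "e2e-watch-test-resource-version"

	// In real code, take rv from the object returned by the first Update().
	rv := "4845240" // illustrative value

	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		ResourceVersion: rv,
		FieldSelector:   "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Only changes *after* rv arrive: here, the second MODIFIED and the DELETED.
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
	}
}
------------------------------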
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":22,"skipped":278,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:14:44.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0402 21:15:15.164820 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 2 21:15:15.164: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:15:15.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5852" for this suite. 
• [SLOW TEST:30.588 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":23,"skipped":286,"failed":0} [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:15:15.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:15:15.324: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ae23852b-2d9d-4dcb-952b-00d79755f9ab" in namespace "security-context-test-7779" to be "success or failure" Apr 2 21:15:15.333: INFO: Pod "busybox-user-65534-ae23852b-2d9d-4dcb-952b-00d79755f9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 9.430729ms Apr 2 21:15:17.358: INFO: Pod "busybox-user-65534-ae23852b-2d9d-4dcb-952b-00d79755f9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034180308s Apr 2 21:15:19.361: INFO: Pod "busybox-user-65534-ae23852b-2d9d-4dcb-952b-00d79755f9ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037707117s Apr 2 21:15:19.361: INFO: Pod "busybox-user-65534-ae23852b-2d9d-4dcb-952b-00d79755f9ab" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:15:19.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7779" for this suite. 
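------------------------------
Note: the spec above sets runAsUser on the container's securityContext and checks the process really runs as UID 65534 (conventionally "nobody"). A sketch of the pod shape; the name, image, and the assertion command are assumptions (pre-1.18 client-go signatures):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	uid := int64(65534) // the UID the spec asserts on
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Exits 0 iff the kubelet honored RunAsUser.
				Command:         []string{"sh", "-c", "test $(id -u) -eq 65534"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}
------------------------------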
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":286,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:15:19.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 2 21:15:19.432: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Apr 2 21:15:19.887: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 2 21:15:22.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458919, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458919, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458919, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458919, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:15:24.770: INFO: Waited 627.356098ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:15:25.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4071" for this suite. 
• [SLOW TEST:5.993 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":25,"skipped":299,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:15:25.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 2 21:15:25.618: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 21:15:25.628: INFO: Waiting for terminating namespaces to be deleted... Apr 2 21:15:25.630: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 2 21:15:25.635: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:15:25.635: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:15:25.635: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:15:25.635: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:15:25.635: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 2 21:15:25.640: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:15:25.640: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:15:25.640: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 2 21:15:25.640: INFO: Container kube-bench ready: false, restart count 0 Apr 2 21:15:25.640: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:15:25.640: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:15:25.640: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 2 21:15:25.640: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1b2ec605-0ed0-45ba-a57e-c682fc895330 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321, hostIP 127.0.0.2 but use UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-1b2ec605-0ed0-45ba-a57e-c682fc895330 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1b2ec605-0ed0-45ba-a57e-c682fc895330 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:15:42.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5179" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.722 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":26,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:15:42.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 2 21:15:46.735: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8447 pod-service-account-3726c00d-dc6c-4f03-8c14-278bbe6c2c92 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 2 21:15:46.974: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8447 pod-service-account-3726c00d-dc6c-4f03-8c14-278bbe6c2c92 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 2 21:15:47.192: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8447 pod-service-account-3726c00d-dc6c-4f03-8c14-278bbe6c2c92 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:15:47.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP:
Destroying namespace "svcaccounts-8447" for this suite. • [SLOW TEST:5.361 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":27,"skipped":319,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:15:47.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Apr 2 21:15:48.343: INFO: created pod pod-service-account-defaultsa Apr 2 21:15:48.343: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 2 21:15:48.347: INFO: created pod pod-service-account-mountsa Apr 2 21:15:48.347: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 2 21:15:48.384: INFO: created pod pod-service-account-nomountsa Apr 2 21:15:48.384: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 2 21:15:48.468: INFO: created pod pod-service-account-defaultsa-mountspec Apr 2 21:15:48.468: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 2 21:15:48.628: INFO: created pod pod-service-account-mountsa-mountspec Apr 2 21:15:48.628: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 2 21:15:48.666: INFO: created pod pod-service-account-nomountsa-mountspec Apr 2 21:15:48.666: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 2 21:15:48.709: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 2 21:15:48.709: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 2 21:15:48.826: INFO: created pod pod-service-account-mountsa-nomountspec Apr 2 21:15:48.826: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 2 21:15:48.848: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 2 21:15:48.848: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:15:48.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2895" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":28,"skipped":332,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:15:49.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 21:15:49.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7028' Apr 2 21:15:49.713: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 2 21:15:49.713: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 2 21:15:49.745: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 2 21:15:49.798: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 2 21:15:49.905: INFO: scanned /root for discovery docs: Apr 2 21:15:49.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7028' Apr 2 21:16:12.209: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 2 21:16:12.209: INFO: stdout: "Created e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc\nScaling up e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Apr 2 21:16:12.209: INFO: stdout: "Created e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc\nScaling up e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 2 21:16:12.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7028' Apr 2 21:16:12.303: INFO: stderr: "" Apr 2 21:16:12.304: INFO: stdout: "e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc-9vnd6 " Apr 2 21:16:12.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc-9vnd6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7028' Apr 2 21:16:12.399: INFO: stderr: "" Apr 2 21:16:12.399: INFO: stdout: "true" Apr 2 21:16:12.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc-9vnd6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7028' Apr 2 21:16:12.498: INFO: stderr: "" Apr 2 21:16:12.498: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 2 21:16:12.498: INFO: e2e-test-httpd-rc-41dca67cc32786bdfe578e700c10e8bc-9vnd6 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Apr 2 21:16:12.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7028' Apr 2 21:16:12.618: INFO: stderr: "" Apr 2 21:16:12.618: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:12.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7028" for this suite. 
• [SLOW TEST:23.474 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":29,"skipped":335,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:12.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 2 21:16:12.752: INFO: Waiting up to 5m0s for pod "pod-5192b78a-d997-40d5-836c-cf512109c19f" in namespace "emptydir-3343" to be "success or failure" Apr 2 21:16:12.756: INFO: Pod "pod-5192b78a-d997-40d5-836c-cf512109c19f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.607756ms Apr 2 21:16:14.765: INFO: Pod "pod-5192b78a-d997-40d5-836c-cf512109c19f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012959579s Apr 2 21:16:16.769: INFO: Pod "pod-5192b78a-d997-40d5-836c-cf512109c19f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017429259s STEP: Saw pod success Apr 2 21:16:16.769: INFO: Pod "pod-5192b78a-d997-40d5-836c-cf512109c19f" satisfied condition "success or failure" Apr 2 21:16:16.773: INFO: Trying to get logs from node jerma-worker2 pod pod-5192b78a-d997-40d5-836c-cf512109c19f container test-container: STEP: delete the pod Apr 2 21:16:16.792: INFO: Waiting for pod pod-5192b78a-d997-40d5-836c-cf512109c19f to disappear Apr 2 21:16:16.810: INFO: Pod pod-5192b78a-d997-40d5-836c-cf512109c19f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:16.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3343" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:16.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 2 21:16:23.958: INFO: 9 pods remaining Apr 2 21:16:23.958: INFO: 0 pods has nil DeletionTimestamp Apr 2 21:16:23.958: INFO: Apr 2 21:16:24.790: INFO: 0 pods remaining Apr 2 21:16:24.790: INFO: 0 pods has nil DeletionTimestamp Apr 2 21:16:24.790: INFO: STEP: Gathering metrics W0402 21:16:26.151068 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 2 21:16:26.151: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:26.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7455" for this suite. 
• [SLOW TEST:9.469 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":31,"skipped":377,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:26.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 2 21:16:30.993: INFO: &Pod{ObjectMeta:{send-events-7cf81cb8-2aa0-442b-a351-b0b35b421895 events-7717 /api/v1/namespaces/events-7717/pods/send-events-7cf81cb8-2aa0-442b-a351-b0b35b421895 d32af64b-ddc7-4f58-8b16-cce5ad2fa51c 4846157 0 2020-04-02 21:16:26 +0000 UTC map[name:foo time:540740799] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2q2rc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2q2rc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2q2rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.92,StartTime:2020-04-02 21:16:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:16:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7068f5883eb695982507604de37c6034ff02e689b9e491f502c6a6305b2ad9ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 2 21:16:32.998: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 2 21:16:35.002: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:35.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7717" for this suite. • [SLOW TEST:8.733 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":32,"skipped":388,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:35.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:16:35.086: INFO: Creating deployment "test-recreate-deployment" Apr 2 21:16:35.090: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 2 21:16:35.127: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 2 21:16:37.135: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 2 21:16:37.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458995, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458995, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458995, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721458995, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:16:39.143: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 2 21:16:39.151: INFO: Updating deployment test-recreate-deployment Apr 2 21:16:39.151: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 2 21:16:39.523: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7574 /apis/apps/v1/namespaces/deployment-7574/deployments/test-recreate-deployment f6aae452-e2df-454e-9173-3d3c8c031134 4846288 2 2020-04-02 21:16:35 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fd20e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-02 21:16:39 +0000 UTC,LastTransitionTime:2020-04-02 21:16:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-02 21:16:39 +0000 UTC,LastTransitionTime:2020-04-02 21:16:35 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 2 21:16:39.596: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7574 /apis/apps/v1/namespaces/deployment-7574/replicasets/test-recreate-deployment-5f94c574ff 890de9d4-af43-4359-8f6a-c37281c1fe58 4846287 1 2020-04-02 21:16:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f6aae452-e2df-454e-9173-3d3c8c031134 0xc003fd2487 0xc003fd2488}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fd24e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 21:16:39.596: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 2 21:16:39.596: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-7574 /apis/apps/v1/namespaces/deployment-7574/replicasets/test-recreate-deployment-799c574856 af27389a-7c12-4455-87b7-2b911f181716 4846277 2 2020-04-02 21:16:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f6aae452-e2df-454e-9173-3d3c8c031134 0xc003fd2557 0xc003fd2558}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fd25c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 21:16:39.600: INFO: Pod "test-recreate-deployment-5f94c574ff-lz5kn" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-lz5kn test-recreate-deployment-5f94c574ff- deployment-7574 /api/v1/namespaces/deployment-7574/pods/test-recreate-deployment-5f94c574ff-lz5kn c0b8094d-8a91-475b-add8-fa5f0d55bff5 4846289 0 2020-04-02 21:16:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 890de9d4-af43-4359-8f6a-c37281c1fe58 0xc003a3ec77 0xc003a3ec78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qnpz4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qnpz4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qnpz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:16:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-02 21:16:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:39.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7574" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":33,"skipped":393,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:39.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:16:39.729: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0b7e7f23-eb2b-4d9b-b486-c9d31eb8ca87" in namespace "security-context-test-7379" to be "success or failure" Apr 2 21:16:39.746: INFO: Pod "alpine-nnp-false-0b7e7f23-eb2b-4d9b-b486-c9d31eb8ca87": Phase="Pending", Reason="", readiness=false. Elapsed: 16.900482ms Apr 2 21:16:41.783: INFO: Pod "alpine-nnp-false-0b7e7f23-eb2b-4d9b-b486-c9d31eb8ca87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054428948s Apr 2 21:16:43.788: INFO: Pod "alpine-nnp-false-0b7e7f23-eb2b-4d9b-b486-c9d31eb8ca87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058515053s Apr 2 21:16:43.788: INFO: Pod "alpine-nnp-false-0b7e7f23-eb2b-4d9b-b486-c9d31eb8ca87" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:43.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7379" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":407,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:43.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 2 21:16:44.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2556' Apr 2 21:16:44.431: INFO: stderr: "" Apr 2 21:16:44.431: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 2 21:16:44.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2556' Apr 2 21:16:44.567: INFO: stderr: "" Apr 2 21:16:44.567: INFO: stdout: "update-demo-nautilus-6lfrn update-demo-nautilus-6m9r9 " Apr 2 21:16:44.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6lfrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2556' Apr 2 21:16:44.672: INFO: stderr: "" Apr 2 21:16:44.672: INFO: stdout: "" Apr 2 21:16:44.672: INFO: update-demo-nautilus-6lfrn is created but not running Apr 2 21:16:49.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2556' Apr 2 21:16:49.778: INFO: stderr: "" Apr 2 21:16:49.778: INFO: stdout: "update-demo-nautilus-6lfrn update-demo-nautilus-6m9r9 " Apr 2 21:16:49.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6lfrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2556' Apr 2 21:16:49.871: INFO: stderr: "" Apr 2 21:16:49.871: INFO: stdout: "true" Apr 2 21:16:49.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6lfrn -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2556' Apr 2 21:16:49.968: INFO: stderr: "" Apr 2 21:16:49.968: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 21:16:49.968: INFO: validating pod update-demo-nautilus-6lfrn Apr 2 21:16:49.971: INFO: got data: { "image": "nautilus.jpg" } Apr 2 21:16:49.971: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 21:16:49.971: INFO: update-demo-nautilus-6lfrn is verified up and running Apr 2 21:16:49.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m9r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2556' Apr 2 21:16:50.054: INFO: stderr: "" Apr 2 21:16:50.054: INFO: stdout: "true" Apr 2 21:16:50.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m9r9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2556' Apr 2 21:16:50.158: INFO: stderr: "" Apr 2 21:16:50.158: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 21:16:50.158: INFO: validating pod update-demo-nautilus-6m9r9 Apr 2 21:16:50.162: INFO: got data: { "image": "nautilus.jpg" } Apr 2 21:16:50.162: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 21:16:50.162: INFO: update-demo-nautilus-6m9r9 is verified up and running STEP: using delete to clean up resources Apr 2 21:16:50.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2556' Apr 2 21:16:50.269: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:16:50.269: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 2 21:16:50.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2556' Apr 2 21:16:50.386: INFO: stderr: "No resources found in kubectl-2556 namespace.\n" Apr 2 21:16:50.386: INFO: stdout: "" Apr 2 21:16:50.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2556 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 21:16:50.636: INFO: stderr: "" Apr 2 21:16:50.636: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:50.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2556" for this suite. 
• [SLOW TEST:6.843 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":35,"skipped":422,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:50.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 2 21:16:55.440: INFO: Successfully updated pod "annotationupdate6dc9f1b8-dd36-4b45-989f-be9db0e6f4fc" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:16:57.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4875" for this suite. 
• [SLOW TEST:6.821 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":443,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:16:57.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:16:57.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4" in namespace "projected-9046" to be "success or failure" Apr 2 21:16:57.524: INFO: Pod "downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043291ms Apr 2 21:16:59.532: INFO: Pod "downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016165394s Apr 2 21:17:01.536: INFO: Pod "downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020084232s STEP: Saw pod success Apr 2 21:17:01.536: INFO: Pod "downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4" satisfied condition "success or failure" Apr 2 21:17:01.539: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4 container client-container: STEP: delete the pod Apr 2 21:17:01.595: INFO: Waiting for pod downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4 to disappear Apr 2 21:17:01.634: INFO: Pod downwardapi-volume-e6032684-87ef-4d53-b318-c5d9d23e20b4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:17:01.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9046" for this suite. 
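Here the downward API data comes through a projected volume rather than a plain downwardAPI volume, exposing the container's own CPU limit via resourceFieldRef. A sketch under assumed values follows; the image, limit, and file name are not in the log, while the container name client-container is:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu    # hypothetical; the framework generates a UID-based name
spec:
  containers:
  - name: client-container
    image: busybox:1.29           # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                  # assumption: any explicit limit; this is the value that gets published
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu

The pod prints the projected file and exits, so a Succeeded phase is what satisfies the "success or failure" condition in the polling lines above.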
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":452,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:17:01.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:17:01.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843" in namespace "projected-4298" to be "success or failure" Apr 2 21:17:01.698: INFO: Pod "downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843": Phase="Pending", Reason="", readiness=false. Elapsed: 3.326173ms Apr 2 21:17:03.796: INFO: Pod "downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101684259s Apr 2 21:17:05.800: INFO: Pod "downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105679454s STEP: Saw pod success Apr 2 21:17:05.800: INFO: Pod "downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843" satisfied condition "success or failure" Apr 2 21:17:05.803: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843 container client-container: STEP: delete the pod Apr 2 21:17:05.934: INFO: Waiting for pod downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843 to disappear Apr 2 21:17:05.942: INFO: Pod downwardapi-volume-d9bfce19-0b6c-4bfa-8cbf-f69ff5a00843 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:17:05.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4298" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:17:05.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 2 21:17:06.004: INFO: >>> kubeConfig: /root/.kube/config Apr 2 21:17:08.922: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:17:18.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9023" for this suite. • [SLOW TEST:13.043 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":39,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:17:18.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2905 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 2 21:17:19.036: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 2 21:17:43.224: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.96:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2905 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:17:43.224: INFO: >>> kubeConfig: /root/.kube/config I0402 21:17:43.248587 6 log.go:172] (0xc002354580) (0xc0028a9f40) Create stream I0402 21:17:43.248612 6 log.go:172] (0xc002354580) (0xc0028a9f40) Stream added, broadcasting: 1 I0402 21:17:43.250874 6 log.go:172] (0xc002354580) Reply frame received for 1 I0402 21:17:43.250929 6 log.go:172] (0xc002354580) (0xc002974460) Create stream I0402 21:17:43.250946 6 log.go:172] (0xc002354580) (0xc002974460) Stream added, broadcasting: 3 I0402 21:17:43.251854 6 log.go:172] (0xc002354580) Reply frame received for 3 I0402 21:17:43.251897 6 log.go:172] (0xc002354580) (0xc001c2ee60) Create stream I0402 21:17:43.251916 6 log.go:172] (0xc002354580) (0xc001c2ee60) Stream added, broadcasting: 5 I0402 21:17:43.252816 6 log.go:172] (0xc002354580) Reply frame received for 5 I0402 21:17:43.324610 6 log.go:172] (0xc002354580) Data frame received for 5 I0402 21:17:43.324712 6 log.go:172] (0xc001c2ee60) (5) Data frame handling I0402 21:17:43.324752 6 log.go:172] (0xc002354580) Data frame received for 3 I0402 21:17:43.324772 6 log.go:172] (0xc002974460) (3) Data frame handling I0402 21:17:43.324804 6 log.go:172] (0xc002974460) (3) Data frame sent I0402 21:17:43.324973 6 log.go:172] (0xc002354580) Data frame received for 3 I0402 21:17:43.325001 6 log.go:172] (0xc002974460) (3) Data frame handling I0402 21:17:43.326677 6 log.go:172] (0xc002354580) Data frame received for 1 I0402 21:17:43.326726 6 log.go:172] (0xc0028a9f40) (1) Data frame handling I0402 21:17:43.326743 6 log.go:172] (0xc0028a9f40) (1) Data frame sent I0402 21:17:43.326753 6 log.go:172] (0xc002354580) (0xc0028a9f40) Stream removed, broadcasting: 1 I0402 21:17:43.326761 6 log.go:172] (0xc002354580) Go away received I0402 21:17:43.327072 6 log.go:172] (0xc002354580) (0xc0028a9f40) Stream removed, broadcasting: 1 I0402 21:17:43.327085 6 log.go:172] (0xc002354580) (0xc002974460) Stream removed, broadcasting: 3 I0402 21:17:43.327091 6 log.go:172] (0xc002354580) (0xc001c2ee60) Stream removed, broadcasting: 5 Apr 2 21:17:43.327: INFO: Found all expected endpoints: [netserver-0] Apr 2 21:17:43.330: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.146:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2905 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:17:43.330: INFO: >>> kubeConfig: /root/.kube/config I0402 21:17:43.360460 6 log.go:172] (0xc00216e2c0) (0xc001eee820) Create stream I0402 21:17:43.360485 6 log.go:172] (0xc00216e2c0) (0xc001eee820) Stream added, broadcasting: 1 I0402 21:17:43.362644 6 log.go:172] (0xc00216e2c0) Reply frame received for 1 I0402 21:17:43.362680 6 log.go:172] (0xc00216e2c0) (0xc002974500) Create stream I0402 21:17:43.362693 6 log.go:172] (0xc00216e2c0) (0xc002974500) Stream added, broadcasting: 3 I0402 21:17:43.363339 6 log.go:172] (0xc00216e2c0) Reply frame received for 3 I0402 21:17:43.363365 6 log.go:172] (0xc00216e2c0) (0xc001eee8c0) Create stream I0402 21:17:43.363375 6 log.go:172] (0xc00216e2c0) (0xc001eee8c0) Stream added, broadcasting: 5 I0402 21:17:43.364006 6 log.go:172] (0xc00216e2c0) Reply frame received for 5 I0402 21:17:43.436731 6 
log.go:172] (0xc00216e2c0) Data frame received for 3 I0402 21:17:43.436777 6 log.go:172] (0xc002974500) (3) Data frame handling I0402 21:17:43.436794 6 log.go:172] (0xc002974500) (3) Data frame sent I0402 21:17:43.436810 6 log.go:172] (0xc00216e2c0) Data frame received for 3 I0402 21:17:43.436821 6 log.go:172] (0xc002974500) (3) Data frame handling I0402 21:17:43.436873 6 log.go:172] (0xc00216e2c0) Data frame received for 5 I0402 21:17:43.436928 6 log.go:172] (0xc001eee8c0) (5) Data frame handling I0402 21:17:43.438438 6 log.go:172] (0xc00216e2c0) Data frame received for 1 I0402 21:17:43.438465 6 log.go:172] (0xc001eee820) (1) Data frame handling I0402 21:17:43.438485 6 log.go:172] (0xc001eee820) (1) Data frame sent I0402 21:17:43.438502 6 log.go:172] (0xc00216e2c0) (0xc001eee820) Stream removed, broadcasting: 1 I0402 21:17:43.438536 6 log.go:172] (0xc00216e2c0) Go away received I0402 21:17:43.438586 6 log.go:172] (0xc00216e2c0) (0xc001eee820) Stream removed, broadcasting: 1 I0402 21:17:43.438604 6 log.go:172] (0xc00216e2c0) (0xc002974500) Stream removed, broadcasting: 3 I0402 21:17:43.438612 6 log.go:172] (0xc00216e2c0) (0xc001eee8c0) Stream removed, broadcasting: 5 Apr 2 21:17:43.438: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:17:43.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2905" for this suite. • [SLOW TEST:24.454 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":528,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:17:43.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-8bffa855-4a88-41d1-83f1-b560193f60ed in namespace container-probe-4875 Apr 2 21:17:47.558: INFO: Started pod liveness-8bffa855-4a88-41d1-83f1-b560193f60ed in namespace container-probe-4875 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 21:17:47.560: INFO: Initial restart count of pod 
liveness-8bffa855-4a88-41d1-83f1-b560193f60ed is 0 Apr 2 21:18:07.605: INFO: Restart count of pod container-probe-4875/liveness-8bffa855-4a88-41d1-83f1-b560193f60ed is now 1 (20.044973623s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:18:07.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4875" for this suite. • [SLOW TEST:24.182 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:18:07.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:18:08.162: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:18:10.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721459088, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721459088, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721459088, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721459088, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:18:13.189: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook 
via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 2 21:18:17.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1310 to-be-attached-pod -i -c=container1' Apr 2 21:18:20.264: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:18:20.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1310" for this suite. STEP: Destroying namespace "webhook-1310-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.736 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":42,"skipped":587,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:18:20.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Apr 2 21:18:20.468: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 2 21:18:20.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2501' Apr 2 21:18:21.075: INFO: stderr: "" Apr 2 21:18:21.075: INFO: stdout: "service/agnhost-slave created\n" Apr 2 21:18:21.075: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 2 21:18:21.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2501' Apr 2 21:18:21.575: INFO: stderr: "" Apr 2 21:18:21.575: INFO: stdout: "service/agnhost-master created\n" Apr 2 21:18:21.576: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 2 21:18:21.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2501' Apr 2 21:18:21.827: INFO: stderr: "" Apr 2 21:18:21.827: INFO: stdout: "service/frontend created\n" Apr 2 21:18:21.827: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 2 21:18:21.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2501' Apr 2 21:18:22.056: INFO: stderr: "" Apr 2 21:18:22.056: INFO: stdout: "deployment.apps/frontend created\n" Apr 2 21:18:22.056: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 2 21:18:22.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2501' Apr 2 21:18:22.314: INFO: stderr: "" Apr 2 21:18:22.314: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 2 21:18:22.314: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 2 21:18:22.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2501' Apr 2 21:18:22.605: INFO: stderr: "" Apr 2 21:18:22.605: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 2 21:18:22.605: INFO: Waiting for all frontend pods to be Running. Apr 2 21:18:32.656: INFO: Waiting for frontend to serve content. Apr 2 21:18:32.667: INFO: Trying to add a new entry to the guestbook. Apr 2 21:18:32.678: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 2 21:18:32.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2501' Apr 2 21:18:32.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:18:32.817: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 2 21:18:32.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2501' Apr 2 21:18:32.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:18:32.981: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 2 21:18:32.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2501' Apr 2 21:18:33.112: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:18:33.112: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 2 21:18:33.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2501' Apr 2 21:18:33.213: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:18:33.213: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 2 21:18:33.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2501' Apr 2 21:18:33.325: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:18:33.325: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 2 21:18:33.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2501' Apr 2 21:18:33.662: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:18:33.662: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:18:33.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2501" for this suite. 
• [SLOW TEST:13.316 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":43,"skipped":592,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:18:33.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:18:34.403: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:18:35.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6874" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":44,"skipped":606,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:18:35.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 2 21:18:35.889: INFO: Waiting up to 5m0s for pod "pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9" in namespace "emptydir-1668" to be "success or failure" Apr 2 21:18:35.908: INFO: Pod "pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.917406ms Apr 2 21:18:37.912: INFO: Pod "pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022454507s Apr 2 21:18:39.916: INFO: Pod "pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.026312397s Apr 2 21:18:41.920: INFO: Pod "pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030362285s STEP: Saw pod success Apr 2 21:18:41.920: INFO: Pod "pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9" satisfied condition "success or failure" Apr 2 21:18:41.923: INFO: Trying to get logs from node jerma-worker pod pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9 container test-container: STEP: delete the pod Apr 2 21:18:41.966: INFO: Waiting for pod pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9 to disappear Apr 2 21:18:41.995: INFO: Pod pod-4e8e1d04-4520-4c67-ac05-cc6fadbeceb9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:18:41.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1668" for this suite. • [SLOW TEST:6.192 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":610,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:18:42.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 2 21:18:42.064: INFO: Waiting up to 5m0s for pod "pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf" in namespace "emptydir-9860" to be "success or failure" Apr 2 21:18:42.067: INFO: Pod "pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.582176ms Apr 2 21:18:44.071: INFO: Pod "pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007208455s Apr 2 21:18:46.075: INFO: Pod "pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011162873s STEP: Saw pod success Apr 2 21:18:46.075: INFO: Pod "pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf" satisfied condition "success or failure" Apr 2 21:18:46.078: INFO: Trying to get logs from node jerma-worker2 pod pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf container test-container: STEP: delete the pod Apr 2 21:18:46.123: INFO: Waiting for pod pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf to disappear Apr 2 21:18:46.131: INFO: Pod pod-ac4de70e-a40c-4cb5-bf79-e11c447ef9cf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:18:46.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9860" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":616,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:18:46.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4825 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4825;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4825 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4825;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4825.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4825.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4825.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4825.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4825.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4825.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4825.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.88.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.88.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.88.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.88.194_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4825 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4825;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4825 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4825;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4825.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4825.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4825.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4825.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4825.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4825.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4825.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4825.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4825.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.88.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.88.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.88.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.88.194_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:18:52.364: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.367: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.369: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.372: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.378: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.381: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.384: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.405: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.407: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.410: INFO: Unable to read jessie_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.416: INFO: Unable to read jessie_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.419: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.422: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.425: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:52.441: INFO: Lookups using dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4825 wheezy_tcp@dns-test-service.dns-4825 wheezy_udp@dns-test-service.dns-4825.svc wheezy_tcp@dns-test-service.dns-4825.svc wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4825 jessie_tcp@dns-test-service.dns-4825 jessie_udp@dns-test-service.dns-4825.svc jessie_tcp@dns-test-service.dns-4825.svc jessie_udp@_http._tcp.dns-test-service.dns-4825.svc jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc] Apr 2 21:18:57.447: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.451: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.455: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.458: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.464: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.469: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.487: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.490: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.493: INFO: Unable to read jessie_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.498: INFO: Unable to read jessie_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.501: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.504: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.508: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:18:57.530: INFO: Lookups using dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4825 wheezy_tcp@dns-test-service.dns-4825 wheezy_udp@dns-test-service.dns-4825.svc wheezy_tcp@dns-test-service.dns-4825.svc wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4825 jessie_tcp@dns-test-service.dns-4825 jessie_udp@dns-test-service.dns-4825.svc jessie_tcp@dns-test-service.dns-4825.svc jessie_udp@_http._tcp.dns-test-service.dns-4825.svc jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc] Apr 2 21:19:02.447: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.451: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.454: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.458: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825 from pod 
dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.464: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.467: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.469: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.489: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.491: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.494: INFO: Unable to read jessie_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.496: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.499: INFO: Unable to read jessie_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.508: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:02.526: INFO: Lookups using dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4825 wheezy_tcp@dns-test-service.dns-4825 wheezy_udp@dns-test-service.dns-4825.svc wheezy_tcp@dns-test-service.dns-4825.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4825 jessie_tcp@dns-test-service.dns-4825 jessie_udp@dns-test-service.dns-4825.svc jessie_tcp@dns-test-service.dns-4825.svc jessie_udp@_http._tcp.dns-test-service.dns-4825.svc jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc] Apr 2 21:19:07.446: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.451: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.454: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.457: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.464: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.467: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.469: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.487: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.490: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.493: INFO: Unable to read jessie_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.498: INFO: Unable to read jessie_udp@dns-test-service.dns-4825.svc from pod 
dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.500: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.503: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.505: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:07.521: INFO: Lookups using dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4825 wheezy_tcp@dns-test-service.dns-4825 wheezy_udp@dns-test-service.dns-4825.svc wheezy_tcp@dns-test-service.dns-4825.svc wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4825 jessie_tcp@dns-test-service.dns-4825 jessie_udp@dns-test-service.dns-4825.svc jessie_tcp@dns-test-service.dns-4825.svc jessie_udp@_http._tcp.dns-test-service.dns-4825.svc jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc] Apr 2 21:19:12.447: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.450: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.454: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.457: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.465: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.468: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.471: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod 
dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.496: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.500: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.503: INFO: Unable to read jessie_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.509: INFO: Unable to read jessie_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.512: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.515: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.517: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:12.536: INFO: Lookups using dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4825 wheezy_tcp@dns-test-service.dns-4825 wheezy_udp@dns-test-service.dns-4825.svc wheezy_tcp@dns-test-service.dns-4825.svc wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4825 jessie_tcp@dns-test-service.dns-4825 jessie_udp@dns-test-service.dns-4825.svc jessie_tcp@dns-test-service.dns-4825.svc jessie_udp@_http._tcp.dns-test-service.dns-4825.svc jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc] Apr 2 21:19:17.447: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.451: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.455: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could 
not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.458: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.463: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.468: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.488: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.491: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.494: INFO: Unable to read jessie_udp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.497: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825 from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.500: INFO: Unable to read jessie_udp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.503: INFO: Unable to read jessie_tcp@dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.508: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc from pod dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5: the server could not find the requested resource (get pods dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5) Apr 2 21:19:17.526: INFO: Lookups using dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-4825 wheezy_tcp@dns-test-service.dns-4825 wheezy_udp@dns-test-service.dns-4825.svc wheezy_tcp@dns-test-service.dns-4825.svc wheezy_udp@_http._tcp.dns-test-service.dns-4825.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4825.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4825 jessie_tcp@dns-test-service.dns-4825 jessie_udp@dns-test-service.dns-4825.svc jessie_tcp@dns-test-service.dns-4825.svc jessie_udp@_http._tcp.dns-test-service.dns-4825.svc jessie_tcp@_http._tcp.dns-test-service.dns-4825.svc] Apr 2 21:19:22.525: INFO: DNS probes using dns-4825/dns-test-a0c83111-fae8-4f19-878c-8ca0d57c87b5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:19:23.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4825" for this suite. • [SLOW TEST:36.989 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":47,"skipped":618,"failed":0}
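The wheezy and jessie probe scripts logged earlier are plain shell around dig. A trimmed sketch of one iteration, assuming the doubled $$ in the logged template collapses to a single $ by the time the pod's shell runs it:

    # Partial name: resolution relies on the search path in the pod's
    # /etc/resolv.conf (the namespace and cluster domain get appended).
    check="$(dig +notcp +noall +answer +search dns-test-service A)" \
      && test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service
    # Same lookup over TCP, qualified with the namespace and "svc" label.
    check="$(dig +tcp +noall +answer +search dns-test-service.dns-4825.svc A)" \
      && test -n "$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4825.svc

The early "Unable to read" failures above are expected: the probe pod retries until the records propagate, and the run converges to "DNS probes ... succeeded" before the timeout.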
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:19:23.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:19:23.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6" in namespace "downward-api-9851" to be "success or failure" Apr 2 21:19:23.222: INFO: Pod "downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.696477ms Apr 2 21:19:25.247: INFO: Pod "downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034477338s Apr 2 21:19:27.265: INFO: Pod "downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05255211s STEP: Saw pod success Apr 2 21:19:27.265: INFO: Pod "downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6" satisfied condition "success or failure" Apr 2 21:19:27.267: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6 container client-container: STEP: delete the pod Apr 2 21:19:27.283: INFO: Waiting for pod downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6 to disappear Apr 2 21:19:27.288: INFO: Pod downwardapi-volume-7810324d-d053-4736-a54f-0d25ed9756c6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:19:27.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9851" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":651,"failed":0} SS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:19:27.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 2 21:19:27.408: INFO: Waiting up to 5m0s for pod "downward-api-79389e68-fa3e-4675-b449-2806d7465352" in namespace "downward-api-7916" to be "success or failure" Apr 2 21:19:27.422: INFO: Pod "downward-api-79389e68-fa3e-4675-b449-2806d7465352": Phase="Pending", Reason="", readiness=false. Elapsed: 14.015967ms Apr 2 21:19:29.468: INFO: Pod "downward-api-79389e68-fa3e-4675-b449-2806d7465352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060532385s Apr 2 21:19:31.472: INFO: Pod "downward-api-79389e68-fa3e-4675-b449-2806d7465352": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064494855s STEP: Saw pod success Apr 2 21:19:31.472: INFO: Pod "downward-api-79389e68-fa3e-4675-b449-2806d7465352" satisfied condition "success or failure" Apr 2 21:19:31.474: INFO: Trying to get logs from node jerma-worker pod downward-api-79389e68-fa3e-4675-b449-2806d7465352 container dapi-container: STEP: delete the pod Apr 2 21:19:31.510: INFO: Waiting for pod downward-api-79389e68-fa3e-4675-b449-2806d7465352 to disappear Apr 2 21:19:31.518: INFO: Pod downward-api-79389e68-fa3e-4675-b449-2806d7465352 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:19:31.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7916" for this suite.
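The pod behind this test wires the node's address into an environment variable through the downward API. A minimal sketch with placeholder names; only the fieldRef to status.hostIP is the feature under test:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-host-ip-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo HOST_IP=\$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
    EOF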
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":653,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:19:31.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 2 21:19:31.610: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:19:31.626: INFO: Number of nodes with available pods: 0 Apr 2 21:19:31.626: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:19:32.631: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:19:32.635: INFO: Number of nodes with available pods: 0 Apr 2 21:19:32.635: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:19:33.657: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:19:33.659: INFO: Number of nodes with available pods: 0 Apr 2 21:19:33.659: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:19:34.632: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:19:34.648: INFO: Number of nodes with available pods: 0 Apr 2 21:19:34.648: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:19:35.633: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:19:35.638: INFO: Number of nodes with available pods: 2 Apr 2 21:19:35.638: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 2 21:19:35.665: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:19:35.680: INFO: Number of nodes with available pods: 2 Apr 2 21:19:35.680: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4153, will wait for the garbage collector to delete the pods Apr 2 21:19:36.781: INFO: Deleting DaemonSet.extensions daemon-set took: 10.988129ms Apr 2 21:19:37.081: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.328436ms Apr 2 21:19:40.885: INFO: Number of nodes with available pods: 0 Apr 2 21:19:40.885: INFO: Number of running nodes: 0, number of available pods: 0 Apr 2 21:19:40.891: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4153/daemonsets","resourceVersion":"4847638"},"items":null} Apr 2 21:19:40.893: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4153/pods","resourceVersion":"4847638"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:19:40.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4153" for this suite. • [SLOW TEST:9.403 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":50,"skipped":654,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:19:40.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 2 21:19:41.009: INFO: Waiting up to 5m0s for pod "client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9" in namespace "containers-3009" to be "success or failure" Apr 2 21:19:41.013: INFO: Pod "client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.793005ms Apr 2 21:19:43.017: INFO: Pod "client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00777192s Apr 2 21:19:45.020: INFO: Pod "client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011542264s STEP: Saw pod success Apr 2 21:19:45.020: INFO: Pod "client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9" satisfied condition "success or failure" Apr 2 21:19:45.023: INFO: Trying to get logs from node jerma-worker pod client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9 container test-container: STEP: delete the pod Apr 2 21:19:45.062: INFO: Waiting for pod client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9 to disappear Apr 2 21:19:45.078: INFO: Pod client-containers-db859610-3253-4cf0-a9e5-bebd589fdaf9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:19:45.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3009" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":667,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:19:45.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:19:49.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8266" for this suite. 
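The hostAliases exercised by this test are written by the kubelet into the container's /etc/hosts alongside the pod's own records. A minimal sketch with a placeholder IP and hostnames:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: cat-hosts
        image: busybox
        command: ["cat", "/etc/hosts"]
    EOF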
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":676,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:19:49.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-d3f9ce96-5918-4b0d-9410-952a9fe55909 STEP: Creating secret with name s-test-opt-upd-4c99b13f-9ace-4ce3-8fc9-075e359a574c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d3f9ce96-5918-4b0d-9410-952a9fe55909 STEP: Updating secret s-test-opt-upd-4c99b13f-9ace-4ce3-8fc9-075e359a574c STEP: Creating secret with name s-test-opt-create-f28ddc9e-e45b-4aef-bdd4-f081255129a8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:21:09.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5157" for this suite. • [SLOW TEST:80.529 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":698,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:21:09.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 2 21:21:09.784: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9348 /api/v1/namespaces/watch-9348/configmaps/e2e-watch-test-watch-closed 013ff3bd-50a0-4847-9468-e8cbb2ca8d54 4847999 0 
2020-04-02 21:21:09 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 2 21:21:09.784: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9348 /api/v1/namespaces/watch-9348/configmaps/e2e-watch-test-watch-closed 013ff3bd-50a0-4847-9468-e8cbb2ca8d54 4848000 0 2020-04-02 21:21:09 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 2 21:21:09.795: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9348 /api/v1/namespaces/watch-9348/configmaps/e2e-watch-test-watch-closed 013ff3bd-50a0-4847-9468-e8cbb2ca8d54 4848001 0 2020-04-02 21:21:09 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 2 21:21:09.795: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9348 /api/v1/namespaces/watch-9348/configmaps/e2e-watch-test-watch-closed 013ff3bd-50a0-4847-9468-e8cbb2ca8d54 4848002 0 2020-04-02 21:21:09 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:21:09.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9348" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":54,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:21:09.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-3341591e-7382-4b63-905b-f1c70e878fb1 STEP: Creating a pod to test consume secrets Apr 2 21:21:09.891: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe" in namespace "projected-1814" to be "success or failure" Apr 2 21:21:09.914: INFO: Pod "pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 23.727327ms Apr 2 21:21:11.919: INFO: Pod "pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028072587s Apr 2 21:21:13.923: INFO: Pod "pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032504729s STEP: Saw pod success Apr 2 21:21:13.923: INFO: Pod "pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe" satisfied condition "success or failure" Apr 2 21:21:13.927: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe container secret-volume-test: STEP: delete the pod Apr 2 21:21:13.974: INFO: Waiting for pod pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe to disappear Apr 2 21:21:13.984: INFO: Pod pod-projected-secrets-e7f1142d-90a3-4d91-8ec7-f282f4a9bfbe no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:21:13.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1814" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":727,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:21:13.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:21:14.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4" in namespace "projected-5887" to be "success or failure" Apr 2 21:21:14.073: INFO: Pod "downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.939992ms Apr 2 21:21:16.093: INFO: Pod "downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040791188s Apr 2 21:21:18.098: INFO: Pod "downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045384687s STEP: Saw pod success Apr 2 21:21:18.098: INFO: Pod "downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4" satisfied condition "success or failure" Apr 2 21:21:18.101: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4 container client-container: STEP: delete the pod Apr 2 21:21:18.120: INFO: Waiting for pod downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4 to disappear Apr 2 21:21:18.125: INFO: Pod downwardapi-volume-0d566954-7563-427d-82c9-2e0f1731bdd4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:21:18.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5887" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":740,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:21:18.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 2 21:21:18.458: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 21:21:18.467: INFO: Waiting for terminating namespaces to be deleted... 
Apr 2 21:21:18.469: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 2 21:21:18.474: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:21:18.474: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:21:18.474: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:21:18.474: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:21:18.474: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 2 21:21:18.480: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:21:18.480: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:21:18.480: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 2 21:21:18.480: INFO: Container kube-bench ready: false, restart count 0 Apr 2 21:21:18.480: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:21:18.480: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:21:18.480: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 2 21:21:18.480: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cb076e22-5536-4677-9046-8880e90d01d7 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-cb076e22-5536-4677-9046-8880e90d01d7 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-cb076e22-5536-4677-9046-8880e90d01d7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:26:26.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9711" for this suite.
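[Editor's note] The predicate exercised above is hostPort conflict detection: pod4 claims hostPort 54322 on hostIP 0.0.0.0, which overlaps every host address, so pod5 asking for the same port and protocol on 127.0.0.1 of the same node cannot be scheduled. A minimal sketch of the same conflict run by hand (pod names and image are illustrative; assumes kubectl access to the cluster and pins both pods to the jerma-worker node via its standard hostname label):

  $ cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod4
  spec:
    nodeSelector:
      kubernetes.io/hostname: jerma-worker   # pin both pods to one node
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
      ports:
      - containerPort: 8080
        hostPort: 54322        # hostIP left empty defaults to 0.0.0.0 (all addresses)
  EOF
  $ # The same hostPort/protocol on 127.0.0.1 of the same node overlaps 0.0.0.0,
  $ # so this pod should stay Pending with a "didn't have free ports" scheduling event:
  $ cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod5
  spec:
    nodeSelector:
      kubernetes.io/hostname: jerma-worker
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
      ports:
      - containerPort: 8080
        hostPort: 54322
        hostIP: 127.0.0.1
  EOF
  $ kubectl describe pod pod5 | grep -A2 Events: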
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.538 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":57,"skipped":740,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:26:26.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7084 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7084 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7084 Apr 2 21:26:26.813: INFO: Found 0 stateful pods, waiting for 1 Apr 2 21:26:36.817: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 2 21:26:36.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 21:26:37.092: INFO: stderr: "I0402 21:26:36.951054 845 log.go:172] (0xc0009226e0) (0xc00090a280) Create stream\nI0402 21:26:36.951102 845 log.go:172] (0xc0009226e0) (0xc00090a280) Stream added, broadcasting: 1\nI0402 21:26:36.953678 845 log.go:172] (0xc0009226e0) Reply frame received for 1\nI0402 21:26:36.953722 845 log.go:172] (0xc0009226e0) (0xc0005dc500) Create stream\nI0402 21:26:36.953738 845 log.go:172] (0xc0009226e0) (0xc0005dc500) Stream added, broadcasting: 3\nI0402 21:26:36.954929 845 log.go:172] (0xc0009226e0) Reply frame received for 3\nI0402 21:26:36.954972 845 log.go:172] (0xc0009226e0) (0xc00090a320) Create stream\nI0402 21:26:36.954998 845 log.go:172] (0xc0009226e0) (0xc00090a320) Stream added, broadcasting: 5\nI0402 21:26:36.955879 845 log.go:172] (0xc0009226e0) Reply 
frame received for 5\nI0402 21:26:37.058056 845 log.go:172] (0xc0009226e0) Data frame received for 5\nI0402 21:26:37.058097 845 log.go:172] (0xc00090a320) (5) Data frame handling\nI0402 21:26:37.058135 845 log.go:172] (0xc00090a320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 21:26:37.084749 845 log.go:172] (0xc0009226e0) Data frame received for 3\nI0402 21:26:37.084781 845 log.go:172] (0xc0005dc500) (3) Data frame handling\nI0402 21:26:37.084821 845 log.go:172] (0xc0005dc500) (3) Data frame sent\nI0402 21:26:37.084851 845 log.go:172] (0xc0009226e0) Data frame received for 3\nI0402 21:26:37.084888 845 log.go:172] (0xc0005dc500) (3) Data frame handling\nI0402 21:26:37.084960 845 log.go:172] (0xc0009226e0) Data frame received for 5\nI0402 21:26:37.084983 845 log.go:172] (0xc00090a320) (5) Data frame handling\nI0402 21:26:37.087026 845 log.go:172] (0xc0009226e0) Data frame received for 1\nI0402 21:26:37.087050 845 log.go:172] (0xc00090a280) (1) Data frame handling\nI0402 21:26:37.087070 845 log.go:172] (0xc00090a280) (1) Data frame sent\nI0402 21:26:37.087088 845 log.go:172] (0xc0009226e0) (0xc00090a280) Stream removed, broadcasting: 1\nI0402 21:26:37.087240 845 log.go:172] (0xc0009226e0) Go away received\nI0402 21:26:37.087472 845 log.go:172] (0xc0009226e0) (0xc00090a280) Stream removed, broadcasting: 1\nI0402 21:26:37.087510 845 log.go:172] (0xc0009226e0) (0xc0005dc500) Stream removed, broadcasting: 3\nI0402 21:26:37.087541 845 log.go:172] (0xc0009226e0) (0xc00090a320) Stream removed, broadcasting: 5\n" Apr 2 21:26:37.092: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 21:26:37.092: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 21:26:37.096: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 2 21:26:47.101: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 21:26:47.101: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 21:26:47.115: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999495s Apr 2 21:26:48.120: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995235832s Apr 2 21:26:49.125: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990746262s Apr 2 21:26:50.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985484635s Apr 2 21:26:51.134: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.98095797s Apr 2 21:26:52.139: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976267378s Apr 2 21:26:53.144: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.971844194s Apr 2 21:26:54.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967187674s Apr 2 21:26:55.154: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.961874673s Apr 2 21:26:56.158: INFO: Verifying statefulset ss doesn't scale past 1 for another 957.091003ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7084 Apr 2 21:26:57.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 21:26:57.409: INFO: stderr: "I0402 21:26:57.291933 865 log.go:172] (0xc000b466e0) (0xc0006bd9a0) Create stream\nI0402 
21:26:57.291990 865 log.go:172] (0xc000b466e0) (0xc0006bd9a0) Stream added, broadcasting: 1\nI0402 21:26:57.295296 865 log.go:172] (0xc000b466e0) Reply frame received for 1\nI0402 21:26:57.295359 865 log.go:172] (0xc000b466e0) (0xc00021a000) Create stream\nI0402 21:26:57.295379 865 log.go:172] (0xc000b466e0) (0xc00021a000) Stream added, broadcasting: 3\nI0402 21:26:57.296579 865 log.go:172] (0xc000b466e0) Reply frame received for 3\nI0402 21:26:57.296604 865 log.go:172] (0xc000b466e0) (0xc0006bdc20) Create stream\nI0402 21:26:57.296613 865 log.go:172] (0xc000b466e0) (0xc0006bdc20) Stream added, broadcasting: 5\nI0402 21:26:57.298071 865 log.go:172] (0xc000b466e0) Reply frame received for 5\nI0402 21:26:57.403651 865 log.go:172] (0xc000b466e0) Data frame received for 5\nI0402 21:26:57.403702 865 log.go:172] (0xc000b466e0) Data frame received for 3\nI0402 21:26:57.403750 865 log.go:172] (0xc00021a000) (3) Data frame handling\nI0402 21:26:57.403769 865 log.go:172] (0xc00021a000) (3) Data frame sent\nI0402 21:26:57.403785 865 log.go:172] (0xc0006bdc20) (5) Data frame handling\nI0402 21:26:57.403839 865 log.go:172] (0xc0006bdc20) (5) Data frame sent\nI0402 21:26:57.403853 865 log.go:172] (0xc000b466e0) Data frame received for 5\nI0402 21:26:57.403861 865 log.go:172] (0xc0006bdc20) (5) Data frame handling\nI0402 21:26:57.403869 865 log.go:172] (0xc000b466e0) Data frame received for 3\nI0402 21:26:57.403875 865 log.go:172] (0xc00021a000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 21:26:57.405488 865 log.go:172] (0xc000b466e0) Data frame received for 1\nI0402 21:26:57.405513 865 log.go:172] (0xc0006bd9a0) (1) Data frame handling\nI0402 21:26:57.405540 865 log.go:172] (0xc0006bd9a0) (1) Data frame sent\nI0402 21:26:57.405559 865 log.go:172] (0xc000b466e0) (0xc0006bd9a0) Stream removed, broadcasting: 1\nI0402 21:26:57.405740 865 log.go:172] (0xc000b466e0) Go away received\nI0402 21:26:57.405884 865 log.go:172] (0xc000b466e0) (0xc0006bd9a0) Stream removed, broadcasting: 1\nI0402 21:26:57.405911 865 log.go:172] (0xc000b466e0) (0xc00021a000) Stream removed, broadcasting: 3\nI0402 21:26:57.405926 865 log.go:172] (0xc000b466e0) (0xc0006bdc20) Stream removed, broadcasting: 5\n" Apr 2 21:26:57.409: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 21:26:57.409: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 21:26:57.413: INFO: Found 1 stateful pods, waiting for 3 Apr 2 21:27:07.418: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 21:27:07.418: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 21:27:07.418: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 2 21:27:07.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 21:27:07.629: INFO: stderr: "I0402 21:27:07.557733 886 log.go:172] (0xc0009fb600) (0xc0009cc780) Create stream\nI0402 21:27:07.557790 886 log.go:172] (0xc0009fb600) (0xc0009cc780) Stream added, broadcasting: 1\nI0402 21:27:07.564061 886 log.go:172] (0xc0009fb600) Reply frame received for 1\nI0402 21:27:07.564151 886 log.go:172] 
(0xc0009fb600) (0xc0006bbb80) Create stream\nI0402 21:27:07.564180 886 log.go:172] (0xc0009fb600) (0xc0006bbb80) Stream added, broadcasting: 3\nI0402 21:27:07.565208 886 log.go:172] (0xc0009fb600) Reply frame received for 3\nI0402 21:27:07.565244 886 log.go:172] (0xc0009fb600) (0xc000654780) Create stream\nI0402 21:27:07.565254 886 log.go:172] (0xc0009fb600) (0xc000654780) Stream added, broadcasting: 5\nI0402 21:27:07.566148 886 log.go:172] (0xc0009fb600) Reply frame received for 5\nI0402 21:27:07.622997 886 log.go:172] (0xc0009fb600) Data frame received for 5\nI0402 21:27:07.623018 886 log.go:172] (0xc000654780) (5) Data frame handling\nI0402 21:27:07.623030 886 log.go:172] (0xc000654780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 21:27:07.623248 886 log.go:172] (0xc0009fb600) Data frame received for 3\nI0402 21:27:07.623259 886 log.go:172] (0xc0006bbb80) (3) Data frame handling\nI0402 21:27:07.623270 886 log.go:172] (0xc0006bbb80) (3) Data frame sent\nI0402 21:27:07.623575 886 log.go:172] (0xc0009fb600) Data frame received for 5\nI0402 21:27:07.623594 886 log.go:172] (0xc000654780) (5) Data frame handling\nI0402 21:27:07.623758 886 log.go:172] (0xc0009fb600) Data frame received for 3\nI0402 21:27:07.623782 886 log.go:172] (0xc0006bbb80) (3) Data frame handling\nI0402 21:27:07.625526 886 log.go:172] (0xc0009fb600) Data frame received for 1\nI0402 21:27:07.625546 886 log.go:172] (0xc0009cc780) (1) Data frame handling\nI0402 21:27:07.625565 886 log.go:172] (0xc0009cc780) (1) Data frame sent\nI0402 21:27:07.625581 886 log.go:172] (0xc0009fb600) (0xc0009cc780) Stream removed, broadcasting: 1\nI0402 21:27:07.625634 886 log.go:172] (0xc0009fb600) Go away received\nI0402 21:27:07.625912 886 log.go:172] (0xc0009fb600) (0xc0009cc780) Stream removed, broadcasting: 1\nI0402 21:27:07.625929 886 log.go:172] (0xc0009fb600) (0xc0006bbb80) Stream removed, broadcasting: 3\nI0402 21:27:07.625938 886 log.go:172] (0xc0009fb600) (0xc000654780) Stream removed, broadcasting: 5\n" Apr 2 21:27:07.629: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 21:27:07.629: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 21:27:07.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 21:27:07.899: INFO: stderr: "I0402 21:27:07.763099 906 log.go:172] (0xc0009be6e0) (0xc000619ea0) Create stream\nI0402 21:27:07.763183 906 log.go:172] (0xc0009be6e0) (0xc000619ea0) Stream added, broadcasting: 1\nI0402 21:27:07.765528 906 log.go:172] (0xc0009be6e0) Reply frame received for 1\nI0402 21:27:07.765556 906 log.go:172] (0xc0009be6e0) (0xc000576780) Create stream\nI0402 21:27:07.765564 906 log.go:172] (0xc0009be6e0) (0xc000576780) Stream added, broadcasting: 3\nI0402 21:27:07.766676 906 log.go:172] (0xc0009be6e0) Reply frame received for 3\nI0402 21:27:07.766730 906 log.go:172] (0xc0009be6e0) (0xc000619f40) Create stream\nI0402 21:27:07.766758 906 log.go:172] (0xc0009be6e0) (0xc000619f40) Stream added, broadcasting: 5\nI0402 21:27:07.767630 906 log.go:172] (0xc0009be6e0) Reply frame received for 5\nI0402 21:27:07.834783 906 log.go:172] (0xc0009be6e0) Data frame received for 5\nI0402 21:27:07.834817 906 log.go:172] (0xc000619f40) (5) Data frame handling\nI0402 21:27:07.834841 906 log.go:172] (0xc000619f40) (5) Data frame sent\n+ mv 
-v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 21:27:07.892978 906 log.go:172] (0xc0009be6e0) Data frame received for 3\nI0402 21:27:07.893048 906 log.go:172] (0xc000576780) (3) Data frame handling\nI0402 21:27:07.893066 906 log.go:172] (0xc000576780) (3) Data frame sent\nI0402 21:27:07.893096 906 log.go:172] (0xc0009be6e0) Data frame received for 5\nI0402 21:27:07.893107 906 log.go:172] (0xc000619f40) (5) Data frame handling\nI0402 21:27:07.893353 906 log.go:172] (0xc0009be6e0) Data frame received for 3\nI0402 21:27:07.893380 906 log.go:172] (0xc000576780) (3) Data frame handling\nI0402 21:27:07.894936 906 log.go:172] (0xc0009be6e0) Data frame received for 1\nI0402 21:27:07.895004 906 log.go:172] (0xc000619ea0) (1) Data frame handling\nI0402 21:27:07.895027 906 log.go:172] (0xc000619ea0) (1) Data frame sent\nI0402 21:27:07.895044 906 log.go:172] (0xc0009be6e0) (0xc000619ea0) Stream removed, broadcasting: 1\nI0402 21:27:07.895143 906 log.go:172] (0xc0009be6e0) Go away received\nI0402 21:27:07.895362 906 log.go:172] (0xc0009be6e0) (0xc000619ea0) Stream removed, broadcasting: 1\nI0402 21:27:07.895377 906 log.go:172] (0xc0009be6e0) (0xc000576780) Stream removed, broadcasting: 3\nI0402 21:27:07.895383 906 log.go:172] (0xc0009be6e0) (0xc000619f40) Stream removed, broadcasting: 5\n" Apr 2 21:27:07.899: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 21:27:07.899: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 21:27:07.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 21:27:08.136: INFO: stderr: "I0402 21:27:08.030373 929 log.go:172] (0xc0005ba840) (0xc000703c20) Create stream\nI0402 21:27:08.030446 929 log.go:172] (0xc0005ba840) (0xc000703c20) Stream added, broadcasting: 1\nI0402 21:27:08.033450 929 log.go:172] (0xc0005ba840) Reply frame received for 1\nI0402 21:27:08.033509 929 log.go:172] (0xc0005ba840) (0xc000bc2000) Create stream\nI0402 21:27:08.033534 929 log.go:172] (0xc0005ba840) (0xc000bc2000) Stream added, broadcasting: 3\nI0402 21:27:08.034748 929 log.go:172] (0xc0005ba840) Reply frame received for 3\nI0402 21:27:08.034789 929 log.go:172] (0xc0005ba840) (0xc000703e00) Create stream\nI0402 21:27:08.034805 929 log.go:172] (0xc0005ba840) (0xc000703e00) Stream added, broadcasting: 5\nI0402 21:27:08.035965 929 log.go:172] (0xc0005ba840) Reply frame received for 5\nI0402 21:27:08.094583 929 log.go:172] (0xc0005ba840) Data frame received for 5\nI0402 21:27:08.094604 929 log.go:172] (0xc000703e00) (5) Data frame handling\nI0402 21:27:08.094614 929 log.go:172] (0xc000703e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 21:27:08.128690 929 log.go:172] (0xc0005ba840) Data frame received for 3\nI0402 21:27:08.128816 929 log.go:172] (0xc000bc2000) (3) Data frame handling\nI0402 21:27:08.128854 929 log.go:172] (0xc000bc2000) (3) Data frame sent\nI0402 21:27:08.128877 929 log.go:172] (0xc0005ba840) Data frame received for 3\nI0402 21:27:08.128919 929 log.go:172] (0xc000bc2000) (3) Data frame handling\nI0402 21:27:08.129417 929 log.go:172] (0xc0005ba840) Data frame received for 5\nI0402 21:27:08.129434 929 log.go:172] (0xc000703e00) (5) Data frame handling\nI0402 21:27:08.131623 929 log.go:172] (0xc0005ba840) Data frame received for 1\nI0402 21:27:08.131721 929 log.go:172] 
(0xc000703c20) (1) Data frame handling\nI0402 21:27:08.131762 929 log.go:172] (0xc000703c20) (1) Data frame sent\nI0402 21:27:08.131784 929 log.go:172] (0xc0005ba840) (0xc000703c20) Stream removed, broadcasting: 1\nI0402 21:27:08.131814 929 log.go:172] (0xc0005ba840) Go away received\nI0402 21:27:08.132090 929 log.go:172] (0xc0005ba840) (0xc000703c20) Stream removed, broadcasting: 1\nI0402 21:27:08.132104 929 log.go:172] (0xc0005ba840) (0xc000bc2000) Stream removed, broadcasting: 3\nI0402 21:27:08.132111 929 log.go:172] (0xc0005ba840) (0xc000703e00) Stream removed, broadcasting: 5\n" Apr 2 21:27:08.136: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 21:27:08.136: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 21:27:08.136: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 21:27:08.138: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 2 21:27:18.147: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 21:27:18.147: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 2 21:27:18.147: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 2 21:27:18.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999691s Apr 2 21:27:19.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990254003s Apr 2 21:27:20.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985420331s Apr 2 21:27:21.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98070178s Apr 2 21:27:22.182: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976074518s Apr 2 21:27:23.191: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971034484s Apr 2 21:27:24.195: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962094212s Apr 2 21:27:25.200: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957831332s Apr 2 21:27:26.204: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953701108s Apr 2 21:27:27.209: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.304524ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7084 Apr 2 21:27:28.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 21:27:28.430: INFO: stderr: "I0402 21:27:28.355045 950 log.go:172] (0xc0000f4b00) (0xc0006fdb80) Create stream\nI0402 21:27:28.355092 950 log.go:172] (0xc0000f4b00) (0xc0006fdb80) Stream added, broadcasting: 1\nI0402 21:27:28.357493 950 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0402 21:27:28.357551 950 log.go:172] (0xc0000f4b00) (0xc000906000) Create stream\nI0402 21:27:28.357575 950 log.go:172] (0xc0000f4b00) (0xc000906000) Stream added, broadcasting: 3\nI0402 21:27:28.358605 950 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0402 21:27:28.358639 950 log.go:172] (0xc0000f4b00) (0xc0006fdd60) Create stream\nI0402 21:27:28.358649 950 log.go:172] (0xc0000f4b00) (0xc0006fdd60) Stream added, broadcasting: 5\nI0402 21:27:28.359519 950 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0402 21:27:28.423016 950 log.go:172] (0xc0000f4b00) Data frame received for 
3\nI0402 21:27:28.423065 950 log.go:172] (0xc000906000) (3) Data frame handling\nI0402 21:27:28.423085 950 log.go:172] (0xc000906000) (3) Data frame sent\nI0402 21:27:28.423100 950 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0402 21:27:28.423117 950 log.go:172] (0xc000906000) (3) Data frame handling\nI0402 21:27:28.423138 950 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0402 21:27:28.423152 950 log.go:172] (0xc0006fdd60) (5) Data frame handling\nI0402 21:27:28.423159 950 log.go:172] (0xc0006fdd60) (5) Data frame sent\nI0402 21:27:28.423176 950 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0402 21:27:28.423187 950 log.go:172] (0xc0006fdd60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 21:27:28.424520 950 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0402 21:27:28.424543 950 log.go:172] (0xc0006fdb80) (1) Data frame handling\nI0402 21:27:28.424558 950 log.go:172] (0xc0006fdb80) (1) Data frame sent\nI0402 21:27:28.424577 950 log.go:172] (0xc0000f4b00) (0xc0006fdb80) Stream removed, broadcasting: 1\nI0402 21:27:28.424616 950 log.go:172] (0xc0000f4b00) Go away received\nI0402 21:27:28.424895 950 log.go:172] (0xc0000f4b00) (0xc0006fdb80) Stream removed, broadcasting: 1\nI0402 21:27:28.424911 950 log.go:172] (0xc0000f4b00) (0xc000906000) Stream removed, broadcasting: 3\nI0402 21:27:28.424923 950 log.go:172] (0xc0000f4b00) (0xc0006fdd60) Stream removed, broadcasting: 5\n" Apr 2 21:27:28.430: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 21:27:28.430: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 21:27:28.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 21:27:28.631: INFO: stderr: "I0402 21:27:28.550364 972 log.go:172] (0xc0009eea50) (0xc00059dd60) Create stream\nI0402 21:27:28.550429 972 log.go:172] (0xc0009eea50) (0xc00059dd60) Stream added, broadcasting: 1\nI0402 21:27:28.552916 972 log.go:172] (0xc0009eea50) Reply frame received for 1\nI0402 21:27:28.552964 972 log.go:172] (0xc0009eea50) (0xc00092c000) Create stream\nI0402 21:27:28.552979 972 log.go:172] (0xc0009eea50) (0xc00092c000) Stream added, broadcasting: 3\nI0402 21:27:28.553864 972 log.go:172] (0xc0009eea50) Reply frame received for 3\nI0402 21:27:28.553904 972 log.go:172] (0xc0009eea50) (0xc000402820) Create stream\nI0402 21:27:28.553915 972 log.go:172] (0xc0009eea50) (0xc000402820) Stream added, broadcasting: 5\nI0402 21:27:28.554519 972 log.go:172] (0xc0009eea50) Reply frame received for 5\nI0402 21:27:28.623873 972 log.go:172] (0xc0009eea50) Data frame received for 3\nI0402 21:27:28.623925 972 log.go:172] (0xc00092c000) (3) Data frame handling\nI0402 21:27:28.623940 972 log.go:172] (0xc00092c000) (3) Data frame sent\nI0402 21:27:28.623953 972 log.go:172] (0xc0009eea50) Data frame received for 3\nI0402 21:27:28.623971 972 log.go:172] (0xc00092c000) (3) Data frame handling\nI0402 21:27:28.623999 972 log.go:172] (0xc0009eea50) Data frame received for 5\nI0402 21:27:28.624018 972 log.go:172] (0xc000402820) (5) Data frame handling\nI0402 21:27:28.624055 972 log.go:172] (0xc000402820) (5) Data frame sent\nI0402 21:27:28.624071 972 log.go:172] (0xc0009eea50) Data frame received for 5\nI0402 21:27:28.624085 972 log.go:172] (0xc000402820) (5) Data frame handling\n+ mv -v 
/tmp/index.html /usr/local/apache2/htdocs/\nI0402 21:27:28.625747 972 log.go:172] (0xc0009eea50) Data frame received for 1\nI0402 21:27:28.625775 972 log.go:172] (0xc00059dd60) (1) Data frame handling\nI0402 21:27:28.625788 972 log.go:172] (0xc00059dd60) (1) Data frame sent\nI0402 21:27:28.625816 972 log.go:172] (0xc0009eea50) (0xc00059dd60) Stream removed, broadcasting: 1\nI0402 21:27:28.626244 972 log.go:172] (0xc0009eea50) (0xc00059dd60) Stream removed, broadcasting: 1\nI0402 21:27:28.626270 972 log.go:172] (0xc0009eea50) (0xc00092c000) Stream removed, broadcasting: 3\nI0402 21:27:28.626284 972 log.go:172] (0xc0009eea50) (0xc000402820) Stream removed, broadcasting: 5\n" Apr 2 21:27:28.631: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 21:27:28.631: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 21:27:28.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7084 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 21:27:28.860: INFO: stderr: "I0402 21:27:28.771556 995 log.go:172] (0xc000b1e630) (0xc0006db9a0) Create stream\nI0402 21:27:28.771627 995 log.go:172] (0xc000b1e630) (0xc0006db9a0) Stream added, broadcasting: 1\nI0402 21:27:28.777820 995 log.go:172] (0xc000b1e630) Reply frame received for 1\nI0402 21:27:28.777875 995 log.go:172] (0xc000b1e630) (0xc000a2a000) Create stream\nI0402 21:27:28.777900 995 log.go:172] (0xc000b1e630) (0xc000a2a000) Stream added, broadcasting: 3\nI0402 21:27:28.784517 995 log.go:172] (0xc000b1e630) Reply frame received for 3\nI0402 21:27:28.784556 995 log.go:172] (0xc000b1e630) (0xc000a2a0a0) Create stream\nI0402 21:27:28.784574 995 log.go:172] (0xc000b1e630) (0xc000a2a0a0) Stream added, broadcasting: 5\nI0402 21:27:28.791085 995 log.go:172] (0xc000b1e630) Reply frame received for 5\nI0402 21:27:28.853789 995 log.go:172] (0xc000b1e630) Data frame received for 3\nI0402 21:27:28.853839 995 log.go:172] (0xc000a2a000) (3) Data frame handling\nI0402 21:27:28.853859 995 log.go:172] (0xc000a2a000) (3) Data frame sent\nI0402 21:27:28.853883 995 log.go:172] (0xc000b1e630) Data frame received for 5\nI0402 21:27:28.853890 995 log.go:172] (0xc000a2a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 21:27:28.853902 995 log.go:172] (0xc000b1e630) Data frame received for 3\nI0402 21:27:28.853924 995 log.go:172] (0xc000a2a000) (3) Data frame handling\nI0402 21:27:28.853942 995 log.go:172] (0xc000a2a0a0) (5) Data frame sent\nI0402 21:27:28.853950 995 log.go:172] (0xc000b1e630) Data frame received for 5\nI0402 21:27:28.853957 995 log.go:172] (0xc000a2a0a0) (5) Data frame handling\nI0402 21:27:28.855232 995 log.go:172] (0xc000b1e630) Data frame received for 1\nI0402 21:27:28.855277 995 log.go:172] (0xc0006db9a0) (1) Data frame handling\nI0402 21:27:28.855303 995 log.go:172] (0xc0006db9a0) (1) Data frame sent\nI0402 21:27:28.855312 995 log.go:172] (0xc000b1e630) (0xc0006db9a0) Stream removed, broadcasting: 1\nI0402 21:27:28.855325 995 log.go:172] (0xc000b1e630) Go away received\nI0402 21:27:28.855761 995 log.go:172] (0xc000b1e630) (0xc0006db9a0) Stream removed, broadcasting: 1\nI0402 21:27:28.855804 995 log.go:172] (0xc000b1e630) (0xc000a2a000) Stream removed, broadcasting: 3\nI0402 21:27:28.855823 995 log.go:172] (0xc000b1e630) (0xc000a2a0a0) Stream removed, broadcasting: 5\n" Apr 2 21:27:28.860: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 21:27:28.860: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 21:27:28.860: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 2 21:27:38.915: INFO: Deleting all statefulset in ns statefulset-7084 Apr 2 21:27:38.936: INFO: Scaling statefulset ss to 0 Apr 2 21:27:38.945: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 21:27:38.948: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:27:38.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7084" for this suite. • [SLOW TEST:72.299 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":58,"skipped":745,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:27:38.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:27:39.008: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 2 21:27:41.049: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:27:42.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6092" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":59,"skipped":763,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:27:42.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0402 21:27:53.703028 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 2 21:27:53.703: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:27:53.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-949" for this suite. 
• [SLOW TEST:11.636 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":60,"skipped":769,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:27:53.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Apr 2 21:27:53.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5572' Apr 2 21:27:54.053: INFO: stderr: "" Apr 2 21:27:54.053: INFO: stdout: "pod/pause created\n" Apr 2 21:27:54.053: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 2 21:27:54.053: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5572" to be "running and ready" Apr 2 21:27:54.057: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.632618ms Apr 2 21:27:56.069: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015622726s Apr 2 21:27:58.072: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.018705899s Apr 2 21:27:58.072: INFO: Pod "pause" satisfied condition "running and ready" Apr 2 21:27:58.072: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 2 21:27:58.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5572' Apr 2 21:27:58.190: INFO: stderr: "" Apr 2 21:27:58.191: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 2 21:27:58.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5572' Apr 2 21:27:58.296: INFO: stderr: "" Apr 2 21:27:58.296: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 2 21:27:58.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5572' Apr 2 21:27:58.402: INFO: stderr: "" Apr 2 21:27:58.402: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 2 21:27:58.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5572' Apr 2 21:27:58.489: INFO: stderr: "" Apr 2 21:27:58.489: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Apr 2 21:27:58.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5572' Apr 2 21:27:58.599: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 21:27:58.599: INFO: stdout: "pod \"pause\" force deleted\n" Apr 2 21:27:58.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5572' Apr 2 21:27:58.698: INFO: stderr: "No resources found in kubectl-5572 namespace.\n" Apr 2 21:27:58.698: INFO: stdout: "" Apr 2 21:27:58.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5572 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 21:27:58.965: INFO: stderr: "" Apr 2 21:27:58.965: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:27:58.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5572" for this suite. 
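[Editor's note] The three label operations above have compact CLI forms worth calling out: key=value sets a label, -L shows it as an output column, and a trailing dash deletes the key. Reproduced against the pod from this run:

  $ kubectl label pod pause testing-label=testing-label-value   # add
  $ kubectl get pod pause -L testing-label                      # show as a column
  $ kubectl label pod pause testing-label-                      # trailing '-' removes the key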
• [SLOW TEST:5.348 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":61,"skipped":789,"failed":0} [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:27:59.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Apr 2 21:28:04.177: INFO: Pod pod-hostip-c9c61ccd-f612-4083-a863-9f5687aed1a1 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:04.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1202" for this suite. • [SLOW TEST:5.124 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":789,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:04.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 2 21:28:04.243: INFO: Waiting up to 5m0s for pod "pod-d8a4cd1f-c74f-4c00-a610-943abb147abd" in namespace "emptydir-8507" to be "success or failure" Apr 2 21:28:04.290: INFO: Pod "pod-d8a4cd1f-c74f-4c00-a610-943abb147abd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.543489ms Apr 2 21:28:06.293: INFO: Pod "pod-d8a4cd1f-c74f-4c00-a610-943abb147abd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049746409s Apr 2 21:28:08.297: INFO: Pod "pod-d8a4cd1f-c74f-4c00-a610-943abb147abd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053818965s STEP: Saw pod success Apr 2 21:28:08.297: INFO: Pod "pod-d8a4cd1f-c74f-4c00-a610-943abb147abd" satisfied condition "success or failure" Apr 2 21:28:08.300: INFO: Trying to get logs from node jerma-worker pod pod-d8a4cd1f-c74f-4c00-a610-943abb147abd container test-container: STEP: delete the pod Apr 2 21:28:08.336: INFO: Waiting for pod pod-d8a4cd1f-c74f-4c00-a610-943abb147abd to disappear Apr 2 21:28:08.338: INFO: Pod pod-d8a4cd1f-c74f-4c00-a610-943abb147abd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:08.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8507" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":789,"failed":0} SSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:08.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 2 21:28:12.976: INFO: Successfully updated pod "adopt-release-d5tnp" STEP: Checking that the Job readopts the Pod Apr 2 21:28:12.976: INFO: Waiting up to 15m0s for pod "adopt-release-d5tnp" in namespace "job-1250" to be "adopted" Apr 2 21:28:12.980: INFO: Pod "adopt-release-d5tnp": Phase="Running", Reason="", readiness=true. Elapsed: 3.936148ms Apr 2 21:28:14.984: INFO: Pod "adopt-release-d5tnp": Phase="Running", Reason="", readiness=true. Elapsed: 2.007987588s Apr 2 21:28:14.984: INFO: Pod "adopt-release-d5tnp" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 2 21:28:15.492: INFO: Successfully updated pod "adopt-release-d5tnp" STEP: Checking that the Job releases the Pod Apr 2 21:28:15.492: INFO: Waiting up to 15m0s for pod "adopt-release-d5tnp" in namespace "job-1250" to be "released" Apr 2 21:28:15.507: INFO: Pod "adopt-release-d5tnp": Phase="Running", Reason="", readiness=true. Elapsed: 14.937887ms Apr 2 21:28:17.511: INFO: Pod "adopt-release-d5tnp": Phase="Running", Reason="", readiness=true. Elapsed: 2.019171996s Apr 2 21:28:17.511: INFO: Pod "adopt-release-d5tnp" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:17.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1250" for this suite. 
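[Editor's note] The Job case above drives adoption and release by editing the pod, not the Job: removing the pod's controller ownerReference orphans it, and the Job controller re-adopts it because its labels still match the selector; removing those labels then makes the controller release it. Both sides can be inspected by hand (pod name taken from this run; job-name and controller-uid are the labels the Job controller applies in this release):

  $ # The controller ownerReference ties the pod to its Job:
  $ kubectl get pod adopt-release-d5tnp -o jsonpath='{.metadata.ownerReferences[?(@.controller==true)].kind}/{.metadata.ownerReferences[?(@.controller==true)].name}{"\n"}'
  $ # Dropping the Job-applied labels makes the pod stop matching the selector,
  $ # so the controller removes its ownerReference (the "released" condition above):
  $ kubectl label pod adopt-release-d5tnp job-name- controller-uid-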
• [SLOW TEST:9.175 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":64,"skipped":794,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:17.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 2 21:28:22.771: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:22.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-508" for this suite. 
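[Editor's note] The ReplicaSet case above is the mirror image: a pre-existing pod whose 'name' label matches a new ReplicaSet's selector is adopted (it gains a controller ownerReference), and re-labelling the pod out of the selector makes the ReplicaSet release it and create a replacement to restore spec.replicas. A sketch against the names in this run (the replacement label value is illustrative):

  $ # While adopted, the pod carries a controller ownerReference to the ReplicaSet:
  $ kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
  $ # Once the label no longer matches, the RS releases the pod and spins up a new one:
  $ kubectl label pod pod-adoption-release name=released --overwrite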
• [SLOW TEST:5.339 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":65,"skipped":808,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:22.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4912.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4912.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:28:29.101: INFO: DNS probes using dns-4912/dns-test-c8e10c59-0a34-4b1c-8789-bb0511981bce succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:29.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4912" for this suite. 
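[Editor's note] The dig loops above are templated, which is why every shell variable appears as $$; run by hand inside a pod, the probes use single dollars. The notable record form is the pod A record: the pod IP with dots replaced by dashes, under <namespace>.pod.cluster.local (dns-4912 is the namespace from this run; the default cluster.local suffix and a pod with dig installed are assumed):

  # Service A record for the API server, over UDP then TCP:
  dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
  dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A
  # Pod A record: e.g. a pod IP of 10.244.1.7 resolves as 10-244-1-7.dns-4912.pod.cluster.local
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4912.pod.cluster.local"}')
  dig +notcp +noall +answer +search "$podARec" A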
• [SLOW TEST:6.467 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":66,"skipped":819,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:29.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 2 21:28:34.135: INFO: Successfully updated pod "pod-update-activedeadlineseconds-57948c28-8bd9-4e64-b793-abe323abfc95" Apr 2 21:28:34.135: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-57948c28-8bd9-4e64-b793-abe323abfc95" in namespace "pods-1427" to be "terminated due to deadline exceeded" Apr 2 21:28:34.138: INFO: Pod "pod-update-activedeadlineseconds-57948c28-8bd9-4e64-b793-abe323abfc95": Phase="Running", Reason="", readiness=true. Elapsed: 2.731037ms Apr 2 21:28:36.141: INFO: Pod "pod-update-activedeadlineseconds-57948c28-8bd9-4e64-b793-abe323abfc95": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.006461721s Apr 2 21:28:36.141: INFO: Pod "pod-update-activedeadlineseconds-57948c28-8bd9-4e64-b793-abe323abfc95" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:36.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1427" for this suite. 
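Note: the "updating the pod" step above lowers spec.activeDeadlineSeconds on a running pod, after which the kubelet fails it with reason DeadlineExceeded; a hand-run sketch (pod name illustrative):

# activeDeadlineSeconds may be set or decreased on a live pod, never increased.
kubectl patch pod my-pod -p '{"spec":{"activeDeadlineSeconds":5}}'
# A few seconds later the pod reports Failed/DeadlineExceeded:
kubectl get pod my-pod -o jsonpath='{.status.phase}/{.status.reason}'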
• [SLOW TEST:6.820 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":871,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:36.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:28:36.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0" in namespace "projected-6793" to be "success or failure" Apr 2 21:28:36.259: INFO: Pod "downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0": Phase="Pending", Reason="", readiness=false. Elapsed: 37.590285ms Apr 2 21:28:38.264: INFO: Pod "downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04238315s Apr 2 21:28:40.269: INFO: Pod "downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047232112s STEP: Saw pod success Apr 2 21:28:40.269: INFO: Pod "downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0" satisfied condition "success or failure" Apr 2 21:28:40.272: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0 container client-container: STEP: delete the pod Apr 2 21:28:40.303: INFO: Waiting for pod downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0 to disappear Apr 2 21:28:40.315: INFO: Pod downwardapi-volume-0a5a8b54-dc88-4d33-96f3-03637416f2e0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:40.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6793" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":879,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:40.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:28:51.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9985" for this suite. • [SLOW TEST:11.279 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":69,"skipped":884,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:28:51.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 2 21:28:51.806: INFO: namespace kubectl-3685 Apr 2 21:28:51.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3685' Apr 2 21:28:55.916: INFO: stderr: "" Apr 2 21:28:55.916: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 2 21:28:56.919: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:28:56.920: INFO: Found 0 / 1 Apr 2 21:28:57.921: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:28:57.921: INFO: Found 0 / 1 Apr 2 21:28:58.921: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:28:58.921: INFO: Found 0 / 1 Apr 2 21:28:59.921: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:28:59.921: INFO: Found 1 / 1 Apr 2 21:28:59.921: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Apr 2 21:28:59.924: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:28:59.924: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 2 21:28:59.924: INFO: wait on agnhost-master startup in kubectl-3685 Apr 2 21:28:59.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-gcvj4 agnhost-master --namespace=kubectl-3685' Apr 2 21:29:00.056: INFO: stderr: "" Apr 2 21:29:00.056: INFO: stdout: "Paused\n" STEP: exposing RC Apr 2 21:29:00.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3685' Apr 2 21:29:00.190: INFO: stderr: "" Apr 2 21:29:00.190: INFO: stdout: "service/rm2 exposed\n" Apr 2 21:29:00.237: INFO: Service rm2 in namespace kubectl-3685 found. STEP: exposing service Apr 2 21:29:02.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3685' Apr 2 21:29:02.390: INFO: stderr: "" Apr 2 21:29:02.390: INFO: stdout: "service/rm3 exposed\n" Apr 2 21:29:02.395: INFO: Service rm3 in namespace kubectl-3685 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:29:04.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3685" for this suite. • [SLOW TEST:12.807 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":70,"skipped":887,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:29:04.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 2 21:29:04.462: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 21:29:04.481: INFO: Waiting for terminating namespaces to be deleted... 
Apr 2 21:29:04.484: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 2 21:29:04.490: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:29:04.490: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:29:04.490: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:29:04.490: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:29:04.490: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 2 21:29:04.496: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:29:04.497: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:29:04.497: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 2 21:29:04.497: INFO: Container kube-bench ready: false, restart count 0 Apr 2 21:29:04.497: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:29:04.497: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:29:04.497: INFO: agnhost-master-gcvj4 from kubectl-3685 started at 2020-04-02 21:28:55 +0000 UTC (1 container status recorded) Apr 2 21:29:04.497: INFO: Container agnhost-master ready: true, restart count 0 Apr 2 21:29:04.497: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 2 21:29:04.497: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16021df21fcd6105], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:29:05.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-551" for this suite. 
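Note: the FailedScheduling event above is exactly what an unsatisfiable nodeSelector produces; a minimal reproduction (label key, pod name, and image illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.com/nonexistent: "true"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
EOF
# The pod stays Pending; the scheduler records why:
kubectl get events --field-selector reason=FailedScheduling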
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":71,"skipped":900,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:29:05.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 2 21:29:05.595: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 2 21:29:16.035: INFO: >>> kubeConfig: /root/.kube/config Apr 2 21:29:17.894: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:29:28.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9935" for this suite. • [SLOW TEST:22.729 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":72,"skipped":904,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:29:28.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 2 21:29:36.384: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:36.404: INFO: Pod pod-with-poststart-http-hook still exists Apr 2 21:29:38.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:38.409: INFO: Pod pod-with-poststart-http-hook still exists Apr 2 21:29:40.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:40.408: INFO: Pod pod-with-poststart-http-hook still exists Apr 2 21:29:42.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:42.409: INFO: Pod pod-with-poststart-http-hook still exists Apr 2 21:29:44.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:44.409: INFO: Pod pod-with-poststart-http-hook still exists Apr 2 21:29:46.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:46.409: INFO: Pod pod-with-poststart-http-hook still exists Apr 2 21:29:48.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:48.409: INFO: Pod pod-with-poststart-http-hook still exists Apr 2 21:29:50.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 2 21:29:50.409: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:29:50.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5936" for this suite. • [SLOW TEST:22.135 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:29:50.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:29:50.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-939" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":74,"skipped":943,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:29:50.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-51dc01e8-7a78-4eb0-8525-c2fcb5cb162f STEP: Creating a pod to test consume configMaps Apr 2 21:29:50.608: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c" in namespace "projected-5808" to be "success or failure" Apr 2 21:29:50.611: INFO: Pod "pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541094ms Apr 2 21:29:52.615: INFO: Pod "pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006811284s Apr 2 21:29:54.619: INFO: Pod "pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011200349s STEP: Saw pod success Apr 2 21:29:54.619: INFO: Pod "pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c" satisfied condition "success or failure" Apr 2 21:29:54.622: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c container projected-configmap-volume-test: STEP: delete the pod Apr 2 21:29:54.643: INFO: Waiting for pod pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c to disappear Apr 2 21:29:54.654: INFO: Pod pod-projected-configmaps-6d523377-d65f-4594-878f-abe38c64632c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:29:54.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5808" for this suite. 
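Note: the projected-configMap spec above mounts one ConfigMap through two projected volumes in the same pod; a minimal sketch (names, key, and image illustrative):

kubectl create configmap demo-cm --from-literal=key=value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/cm-one/key /etc/cm-two/key"]
    volumeMounts:
    - name: one
      mountPath: /etc/cm-one
    - name: two
      mountPath: /etc/cm-two
  volumes:
  - name: one
    projected:
      sources:
      - configMap:
          name: demo-cm
  - name: two
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF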
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":954,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:29:54.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0402 21:30:04.764051 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 2 21:30:04.764: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:30:04.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8140" for this suite. 
• [SLOW TEST:10.111 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":76,"skipped":967,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:30:04.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 2 21:30:04.841: INFO: Waiting up to 5m0s for pod "pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2" in namespace "emptydir-8549" to be "success or failure" Apr 2 21:30:04.855: INFO: Pod "pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.054373ms Apr 2 21:30:06.860: INFO: Pod "pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018143937s Apr 2 21:30:08.867: INFO: Pod "pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025258356s STEP: Saw pod success Apr 2 21:30:08.867: INFO: Pod "pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2" satisfied condition "success or failure" Apr 2 21:30:08.870: INFO: Trying to get logs from node jerma-worker pod pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2 container test-container: STEP: delete the pod Apr 2 21:30:08.911: INFO: Waiting for pod pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2 to disappear Apr 2 21:30:08.930: INFO: Pod pod-e470808f-940c-468c-9bc7-ae83ccd0f5a2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:30:08.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8549" for this suite. 
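Note: in the (root,0777,tmpfs) spec name, tmpfs corresponds to an emptyDir with medium: Memory; the user and mode describe what the test container writes, not fields on the volume. A minimal sketch (names and image illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    # Show the mount is tmpfs, then create a file with mode 0777 on it.
    command: ["sh", "-c", "mount | grep /mnt; touch /mnt/f && chmod 0777 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-demo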
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":990,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:30:08.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-qhqs STEP: Creating a pod to test atomic-volume-subpath Apr 2 21:30:09.021: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qhqs" in namespace "subpath-5288" to be "success or failure" Apr 2 21:30:09.025: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.839354ms Apr 2 21:30:11.030: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008250551s Apr 2 21:30:13.052: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 4.030793629s Apr 2 21:30:15.056: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 6.034392248s Apr 2 21:30:17.059: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 8.037908628s Apr 2 21:30:19.062: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 10.041034334s Apr 2 21:30:21.067: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 12.045091754s Apr 2 21:30:23.071: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 14.049410857s Apr 2 21:30:25.075: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 16.053616037s Apr 2 21:30:27.079: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 18.057509006s Apr 2 21:30:29.082: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 20.061005536s Apr 2 21:30:31.087: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Running", Reason="", readiness=true. Elapsed: 22.06532278s Apr 2 21:30:33.091: INFO: Pod "pod-subpath-test-configmap-qhqs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.069593131s STEP: Saw pod success Apr 2 21:30:33.091: INFO: Pod "pod-subpath-test-configmap-qhqs" satisfied condition "success or failure" Apr 2 21:30:33.095: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-qhqs container test-container-subpath-configmap-qhqs: STEP: delete the pod Apr 2 21:30:33.162: INFO: Waiting for pod pod-subpath-test-configmap-qhqs to disappear Apr 2 21:30:33.188: INFO: Pod pod-subpath-test-configmap-qhqs no longer exists STEP: Deleting pod pod-subpath-test-configmap-qhqs Apr 2 21:30:33.188: INFO: Deleting pod "pod-subpath-test-configmap-qhqs" in namespace "subpath-5288" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:30:33.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5288" for this suite. • [SLOW TEST:24.263 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":78,"skipped":1024,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:30:33.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-43dc3d81-7e4e-483e-b917-f175edf252f3 STEP: Creating a pod to test consume secrets Apr 2 21:30:33.282: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed" in namespace "projected-8566" to be "success or failure" Apr 2 21:30:33.318: INFO: Pod "pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 35.547376ms Apr 2 21:30:35.329: INFO: Pod "pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046753603s Apr 2 21:30:37.333: INFO: Pod "pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050816818s STEP: Saw pod success Apr 2 21:30:37.333: INFO: Pod "pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed" satisfied condition "success or failure" Apr 2 21:30:37.343: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed container projected-secret-volume-test: STEP: delete the pod Apr 2 21:30:37.362: INFO: Waiting for pod pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed to disappear Apr 2 21:30:37.367: INFO: Pod pod-projected-secrets-8ec0d9c5-38e3-41dd-a6d2-a1d151d0b1ed no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:30:37.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8566" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1039,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:30:37.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 2 21:30:37.453: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 2 21:30:46.526: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:30:46.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8659" for this suite. 
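Note: the watch-based submit/remove checks above can be approximated with kubectl (pod name and image illustrative):

kubectl run watch-demo --image=registry.k8s.io/pause:3.9 --restart=Never
kubectl get pod watch-demo -w &                   # stream the pod's status changes
kubectl delete pod watch-demo --grace-period=30   # graceful deletion, as in the spec
wait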
• [SLOW TEST:9.165 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:30:46.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:30:46.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7603" for this suite. 
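Note: matching requests and limits for both cpu and memory is what yields the Guaranteed QoS class verified above; a minimal sketch (name and image illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed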
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":81,"skipped":1111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:30:46.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 2 21:30:50.914: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 2 21:31:01.014: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:31:01.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3922" for this suite. • [SLOW TEST:14.314 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":82,"skipped":1134,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:31:01.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 2 21:31:01.109: INFO: Created pod &Pod{ObjectMeta:{dns-115 dns-115 /api/v1/namespaces/dns-115/pods/dns-115 66f60a83-6497-41e5-92c3-2ce75e309e7a 4850998 0 2020-04-02 21:31:01 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qtfpb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qtfpb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qtfpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 2 21:31:05.120: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-115 PodName:dns-115 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:31:05.120: INFO: >>> kubeConfig: /root/.kube/config I0402 21:31:05.158312 6 log.go:172] (0xc003e75b80) (0xc0022a7680) Create stream I0402 21:31:05.158340 6 log.go:172] (0xc003e75b80) (0xc0022a7680) Stream added, broadcasting: 1 I0402 21:31:05.159944 6 log.go:172] (0xc003e75b80) Reply frame received for 1 I0402 21:31:05.159974 6 log.go:172] (0xc003e75b80) (0xc002328000) Create stream I0402 21:31:05.159984 6 log.go:172] (0xc003e75b80) (0xc002328000) Stream added, broadcasting: 3 I0402 21:31:05.160826 6 log.go:172] (0xc003e75b80) Reply frame received for 3 I0402 21:31:05.160856 6 log.go:172] (0xc003e75b80) (0xc002238a00) Create stream I0402 21:31:05.160872 6 log.go:172] (0xc003e75b80) (0xc002238a00) Stream added, broadcasting: 5 I0402 21:31:05.161868 6 log.go:172] (0xc003e75b80) Reply frame received for 5 I0402 21:31:05.217495 6 log.go:172] (0xc003e75b80) Data frame received for 3 I0402 21:31:05.217526 6 log.go:172] (0xc002328000) (3) Data frame handling I0402 21:31:05.217544 6 log.go:172] (0xc002328000) (3) Data frame sent I0402 21:31:05.218836 6 log.go:172] (0xc003e75b80) Data frame received for 3 I0402 21:31:05.218858 6 log.go:172] (0xc002328000) (3) Data frame handling I0402 21:31:05.218881 6 log.go:172] (0xc003e75b80) Data frame received for 5 I0402 21:31:05.218892 6 log.go:172] (0xc002238a00) (5) Data frame handling I0402 21:31:05.220443 6 log.go:172] (0xc003e75b80) Data frame received for 1 I0402 21:31:05.220458 6 log.go:172] (0xc0022a7680) (1) Data frame handling I0402 21:31:05.220479 6 log.go:172] (0xc0022a7680) (1) Data frame sent I0402 21:31:05.220500 6 log.go:172] (0xc003e75b80) (0xc0022a7680) Stream removed, broadcasting: 1 I0402 21:31:05.220547 6 log.go:172] (0xc003e75b80) Go away received I0402 21:31:05.220617 6 log.go:172] (0xc003e75b80) (0xc0022a7680) Stream removed, broadcasting: 1 I0402 21:31:05.220628 6 log.go:172] (0xc003e75b80) (0xc002328000) Stream removed, broadcasting: 3 I0402 21:31:05.220633 6 log.go:172] (0xc003e75b80) (0xc002238a00) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Apr 2 21:31:05.220: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-115 PodName:dns-115 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:31:05.220: INFO: >>> kubeConfig: /root/.kube/config I0402 21:31:05.241654 6 log.go:172] (0xc005c82370) (0xc002328280) Create stream I0402 21:31:05.241675 6 log.go:172] (0xc005c82370) (0xc002328280) Stream added, broadcasting: 1 I0402 21:31:05.243366 6 log.go:172] (0xc005c82370) Reply frame received for 1 I0402 21:31:05.243397 6 log.go:172] (0xc005c82370) (0xc000ef88c0) Create stream I0402 21:31:05.243413 6 log.go:172] (0xc005c82370) (0xc000ef88c0) Stream added, broadcasting: 3 I0402 21:31:05.244183 6 log.go:172] (0xc005c82370) Reply frame received for 3 I0402 21:31:05.244221 6 log.go:172] (0xc005c82370) (0xc000ef8be0) Create stream I0402 21:31:05.244233 6 log.go:172] (0xc005c82370) (0xc000ef8be0) Stream added, broadcasting: 5 I0402 21:31:05.245047 6 log.go:172] (0xc005c82370) Reply frame received for 5 I0402 21:31:05.312265 6 log.go:172] (0xc005c82370) Data frame received for 3 I0402 21:31:05.312314 6 log.go:172] (0xc000ef88c0) (3) Data frame handling I0402 21:31:05.312347 6 log.go:172] (0xc000ef88c0) (3) Data frame sent I0402 21:31:05.313330 6 log.go:172] (0xc005c82370) Data frame received for 5 I0402 21:31:05.313359 6 log.go:172] (0xc000ef8be0) (5) Data frame handling I0402 21:31:05.313644 6 log.go:172] (0xc005c82370) Data frame received for 3 I0402 21:31:05.313669 6 log.go:172] (0xc000ef88c0) (3) Data frame handling I0402 21:31:05.315363 6 log.go:172] (0xc005c82370) Data frame received for 1 I0402 21:31:05.315389 6 log.go:172] (0xc002328280) (1) Data frame handling I0402 21:31:05.315425 6 log.go:172] (0xc002328280) (1) Data frame sent I0402 21:31:05.315452 6 log.go:172] (0xc005c82370) (0xc002328280) Stream removed, broadcasting: 1 I0402 21:31:05.315472 6 log.go:172] (0xc005c82370) Go away received I0402 21:31:05.315663 6 log.go:172] (0xc005c82370) (0xc002328280) Stream removed, broadcasting: 1 I0402 21:31:05.315687 6 log.go:172] (0xc005c82370) (0xc000ef88c0) Stream removed, broadcasting: 3 I0402 21:31:05.315696 6 log.go:172] (0xc005c82370) (0xc000ef8be0) Stream removed, broadcasting: 5 Apr 2 21:31:05.315: INFO: Deleting pod dns-115... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:31:05.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-115" for this suite. 
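Note: the dnsPolicy=None pod above carries exactly the resolver settings that the two exec probes verified; a minimal sketch using the same nameserver and search values (pod name and image illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: app
    image: busybox:1.28
    command: ["sleep", "3600"]
EOF
# With dnsPolicy None the kubelet writes only dnsConfig into resolv.conf:
kubectl exec dns-demo -- cat /etc/resolv.conf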
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":83,"skipped":1136,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:31:05.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 2 21:31:05.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:05.706: INFO: Number of nodes with available pods: 0 Apr 2 21:31:05.706: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:31:06.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:06.715: INFO: Number of nodes with available pods: 0 Apr 2 21:31:06.715: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:31:07.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:07.713: INFO: Number of nodes with available pods: 0 Apr 2 21:31:07.713: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:31:08.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:08.714: INFO: Number of nodes with available pods: 0 Apr 2 21:31:08.714: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:31:09.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:09.715: INFO: Number of nodes with available pods: 2 Apr 2 21:31:09.715: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 2 21:31:09.734: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:09.737: INFO: Number of nodes with available pods: 1 Apr 2 21:31:09.737: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:10.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:10.744: INFO: Number of nodes with available pods: 1 Apr 2 21:31:10.744: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:11.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:11.746: INFO: Number of nodes with available pods: 1 Apr 2 21:31:11.746: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:12.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:12.745: INFO: Number of nodes with available pods: 1 Apr 2 21:31:12.745: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:13.755: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:13.759: INFO: Number of nodes with available pods: 1 Apr 2 21:31:13.759: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:14.741: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:14.744: INFO: Number of nodes with available pods: 1 Apr 2 21:31:14.744: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:15.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:15.746: INFO: Number of nodes with available pods: 1 Apr 2 21:31:15.746: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:16.743: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:16.746: INFO: Number of nodes with available pods: 1 Apr 2 21:31:16.746: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:17.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:17.746: INFO: Number of nodes with available pods: 1 Apr 2 21:31:17.746: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:18.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:18.745: INFO: Number of nodes with available pods: 1 Apr 2 21:31:18.745: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:19.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Apr 2 21:31:19.745: INFO: Number of nodes with available pods: 1 Apr 2 21:31:19.745: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:20.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:20.746: INFO: Number of nodes with available pods: 1 Apr 2 21:31:20.746: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:21.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:21.745: INFO: Number of nodes with available pods: 1 Apr 2 21:31:21.745: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:31:22.741: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:31:22.745: INFO: Number of nodes with available pods: 2 Apr 2 21:31:22.745: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5694, will wait for the garbage collector to delete the pods Apr 2 21:31:22.807: INFO: Deleting DaemonSet.extensions daemon-set took: 6.138202ms Apr 2 21:31:23.107: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.388951ms Apr 2 21:31:29.510: INFO: Number of nodes with available pods: 0 Apr 2 21:31:29.511: INFO: Number of running nodes: 0, number of available pods: 0 Apr 2 21:31:29.513: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5694/daemonsets","resourceVersion":"4851167"},"items":null} Apr 2 21:31:29.516: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5694/pods","resourceVersion":"4851167"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:31:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5694" for this suite. 
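For reference, a minimal sketch of the kind of DaemonSet this spec exercises. The name daemon-set matches the log; the label key is illustrative, and httpd:2.4.38-alpine is the image the rollback spec later in this run expects.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Mirror the "stop a daemon pod, check that the daemon pod is revived" step:
kubectl delete pod -l app=daemon-set --wait=false
kubectl get pods -l app=daemon-set -o wide -w

Without an explicit master toleration the controller skips tainted control-plane nodes, which is exactly the repeated "can't tolerate node jerma-control-plane" message above.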
• [SLOW TEST:24.191 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":84,"skipped":1154,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:31:29.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-354f9eec-633e-4500-96cd-e9a688ce864a STEP: Creating a pod to test consume configMaps Apr 2 21:31:29.627: INFO: Waiting up to 5m0s for pod "pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013" in namespace "configmap-2823" to be "success or failure" Apr 2 21:31:29.662: INFO: Pod "pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013": Phase="Pending", Reason="", readiness=false. Elapsed: 34.399995ms Apr 2 21:31:31.707: INFO: Pod "pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079396628s Apr 2 21:31:33.711: INFO: Pod "pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08330985s STEP: Saw pod success Apr 2 21:31:33.711: INFO: Pod "pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013" satisfied condition "success or failure" Apr 2 21:31:33.713: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013 container configmap-volume-test: STEP: delete the pod Apr 2 21:31:33.772: INFO: Waiting for pod pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013 to disappear Apr 2 21:31:33.782: INFO: Pod pod-configmaps-293dba3a-e907-447f-b3d4-1f7290757013 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:31:33.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2823" for this suite. 
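A hedged reconstruction of what "consumable in multiple volumes in the same pod" means in practice: one ConfigMap mounted twice at different paths. All names here are illustrative, not the generated ones from the log.

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/configmap-volume-1
    - name: vol-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: vol-1
    configMap:
      name: configmap-test-volume
  - name: vol-2
    configMap:
      name: configmap-test-volume
EOF
kubectl logs pod-configmaps-demo   # prints value-1 twice once the pod Succeeds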
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1161,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:31:33.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2806 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2806 STEP: creating replication controller externalsvc in namespace services-2806 I0402 21:31:33.980759 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2806, replica count: 2 I0402 21:31:37.031171 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 21:31:40.031324 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 2 21:31:40.067: INFO: Creating new exec pod Apr 2 21:31:44.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2806 execpod5ghsh -- /bin/sh -x -c nslookup clusterip-service' Apr 2 21:31:44.391: INFO: stderr: "I0402 21:31:44.287749 1300 log.go:172] (0xc000afe0b0) (0xc0007af540) Create stream\nI0402 21:31:44.287803 1300 log.go:172] (0xc000afe0b0) (0xc0007af540) Stream added, broadcasting: 1\nI0402 21:31:44.290624 1300 log.go:172] (0xc000afe0b0) Reply frame received for 1\nI0402 21:31:44.290676 1300 log.go:172] (0xc000afe0b0) (0xc000924000) Create stream\nI0402 21:31:44.290690 1300 log.go:172] (0xc000afe0b0) (0xc000924000) Stream added, broadcasting: 3\nI0402 21:31:44.291792 1300 log.go:172] (0xc000afe0b0) Reply frame received for 3\nI0402 21:31:44.291829 1300 log.go:172] (0xc000afe0b0) (0xc000960000) Create stream\nI0402 21:31:44.291848 1300 log.go:172] (0xc000afe0b0) (0xc000960000) Stream added, broadcasting: 5\nI0402 21:31:44.292919 1300 log.go:172] (0xc000afe0b0) Reply frame received for 5\nI0402 21:31:44.374213 1300 log.go:172] (0xc000afe0b0) Data frame received for 5\nI0402 21:31:44.374243 1300 log.go:172] (0xc000960000) (5) Data frame handling\nI0402 21:31:44.374265 1300 log.go:172] (0xc000960000) (5) Data frame sent\n+ nslookup clusterip-service\nI0402 21:31:44.382663 1300 log.go:172] (0xc000afe0b0) Data frame received for 3\nI0402 21:31:44.382692 1300 log.go:172] (0xc000924000) (3) Data frame handling\nI0402 21:31:44.382708 1300 log.go:172] (0xc000924000) (3) Data frame sent\nI0402 21:31:44.383969 
1300 log.go:172] (0xc000afe0b0) Data frame received for 3\nI0402 21:31:44.383992 1300 log.go:172] (0xc000924000) (3) Data frame handling\nI0402 21:31:44.384010 1300 log.go:172] (0xc000924000) (3) Data frame sent\nI0402 21:31:44.384557 1300 log.go:172] (0xc000afe0b0) Data frame received for 5\nI0402 21:31:44.384574 1300 log.go:172] (0xc000960000) (5) Data frame handling\nI0402 21:31:44.384607 1300 log.go:172] (0xc000afe0b0) Data frame received for 3\nI0402 21:31:44.384624 1300 log.go:172] (0xc000924000) (3) Data frame handling\nI0402 21:31:44.386486 1300 log.go:172] (0xc000afe0b0) Data frame received for 1\nI0402 21:31:44.386518 1300 log.go:172] (0xc0007af540) (1) Data frame handling\nI0402 21:31:44.386561 1300 log.go:172] (0xc0007af540) (1) Data frame sent\nI0402 21:31:44.386769 1300 log.go:172] (0xc000afe0b0) (0xc0007af540) Stream removed, broadcasting: 1\nI0402 21:31:44.386841 1300 log.go:172] (0xc000afe0b0) Go away received\nI0402 21:31:44.387066 1300 log.go:172] (0xc000afe0b0) (0xc0007af540) Stream removed, broadcasting: 1\nI0402 21:31:44.387083 1300 log.go:172] (0xc000afe0b0) (0xc000924000) Stream removed, broadcasting: 3\nI0402 21:31:44.387091 1300 log.go:172] (0xc000afe0b0) (0xc000960000) Stream removed, broadcasting: 5\n" Apr 2 21:31:44.391: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2806.svc.cluster.local\tcanonical name = externalsvc.services-2806.svc.cluster.local.\nName:\texternalsvc.services-2806.svc.cluster.local\nAddress: 10.103.255.174\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2806, will wait for the garbage collector to delete the pods Apr 2 21:31:44.451: INFO: Deleting ReplicationController externalsvc took: 6.722451ms Apr 2 21:31:44.752: INFO: Terminating ReplicationController externalsvc pods took: 300.293676ms Apr 2 21:31:59.591: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:31:59.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2806" for this suite. 
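The type change itself can be sketched with a patch; clearing spec.clusterIP is required when converting a ClusterIP service to ExternalName. Names below are illustrative and the FQDN assumes the default namespace, unlike the services-2806 names in the log.

kubectl create service clusterip clusterip-service --tcp=80:80
kubectl patch service clusterip-service -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.default.svc.cluster.local","clusterIP":""}}'
# From a throwaway pod, the name now resolves as a CNAME, as in the nslookup output above:
kubectl run execpod --image=busybox --restart=Never -- nslookup clusterip-service
kubectl logs execpod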
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.832 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":86,"skipped":1162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:31:59.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:31:59.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46" in namespace "projected-9766" to be "success or failure" Apr 2 21:31:59.747: INFO: Pod "downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46": Phase="Pending", Reason="", readiness=false. Elapsed: 12.445649ms Apr 2 21:32:01.751: INFO: Pod "downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0158233s Apr 2 21:32:03.755: INFO: Pod "downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019600694s STEP: Saw pod success Apr 2 21:32:03.755: INFO: Pod "downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46" satisfied condition "success or failure" Apr 2 21:32:03.758: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46 container client-container: STEP: delete the pod Apr 2 21:32:03.778: INFO: Waiting for pod downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46 to disappear Apr 2 21:32:03.782: INFO: Pod downwardapi-volume-abe85f9a-2602-4a86-bd39-3dd186115e46 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:32:03.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9766" for this suite. 
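What "provide container's cpu request" verifies, as a self-contained sketch with illustrative names. With the default divisor of 1, a 250m request is rounded up, so the file reads 1.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
kubectl logs downwardapi-volume-demo   # prints: 1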
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1209,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:32:03.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 2 21:32:11.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 21:32:12.002: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 21:32:14.002: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 21:32:14.006: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 21:32:16.002: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 21:32:16.006: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 21:32:18.002: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 21:32:18.006: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 21:32:20.002: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 21:32:20.006: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:32:20.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1760" for this suite. 
• [SLOW TEST:16.230 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1217,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:32:20.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4492.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4492.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4492.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4492.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:32:26.180: INFO: DNS probes using dns-4492/dns-test-b689e3f3-d502-4540-86a9-41dfdf37bc72 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:32:26.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4492" for this suite. 
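The records the probes query come from a headless service plus a pod with matching hostname and subdomain. A sketch in the default namespace; service and pod names mirror the log.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None        # headless, as in the test
  selector:
    app: dns-querier
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    app: dns-querier
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2
  containers:
  - name: querier
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
# The pod's FQDN should now resolve, mirroring the getent probe above:
kubectl exec dns-querier-2 -- getent hosts dns-querier-2.dns-test-service-2.default.svc.cluster.local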
• [SLOW TEST:6.327 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":89,"skipped":1228,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:32:26.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0402 21:32:27.833620 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 2 21:32:27.833: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:32:27.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5299" for this suite. 
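The behaviour under test, reproduced with kubectl (v1.17 flag syntax; the deployment name is illustrative). Deleting the Deployment without orphaning lets the garbage collector remove the ReplicaSet and pods it owns.

kubectl create deployment gc-demo --image=nginx
kubectl delete deployment gc-demo      # default cascading delete: the RS and pods are garbage collected
kubectl get rs,pods -l app=gc-demo     # eventually: No resources found
# For contrast, "kubectl delete deployment gc-demo --cascade=false" would orphan the ReplicaSet instead.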
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":90,"skipped":1228,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:32:27.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Apr 2 21:32:28.152: INFO: Waiting up to 5m0s for pod "var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc" in namespace "var-expansion-6652" to be "success or failure" Apr 2 21:32:28.160: INFO: Pod "var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.480817ms Apr 2 21:32:30.228: INFO: Pod "var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076414781s Apr 2 21:32:32.232: INFO: Pod "var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc": Phase="Running", Reason="", readiness=true. Elapsed: 4.079878684s Apr 2 21:32:34.235: INFO: Pod "var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083151508s STEP: Saw pod success Apr 2 21:32:34.235: INFO: Pod "var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc" satisfied condition "success or failure" Apr 2 21:32:34.237: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc container dapi-container: STEP: delete the pod Apr 2 21:32:34.252: INFO: Waiting for pod var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc to disappear Apr 2 21:32:34.269: INFO: Pod var-expansion-8dfefa55-496a-429f-af80-bb8ea18014cc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:32:34.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6652" for this suite. 
• [SLOW TEST:6.478 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1229,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:32:34.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 2 21:32:42.433: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 2 21:32:42.438: INFO: Pod pod-with-poststart-exec-hook still exists Apr 2 21:32:44.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 2 21:32:44.444: INFO: Pod pod-with-poststart-exec-hook still exists Apr 2 21:32:46.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 2 21:32:46.442: INFO: Pod pod-with-poststart-exec-hook still exists Apr 2 21:32:48.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 2 21:32:48.442: INFO: Pod pod-with-poststart-exec-hook still exists Apr 2 21:32:50.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 2 21:32:50.442: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:32:50.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5867" for this suite. 
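The post-start counterpart to the pre-stop sketch above, again with a marker file standing in for the HTTP call to the handler pod.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/hook"]
EOF
kubectl exec pod-with-poststart-exec-hook -- cat /tmp/hook   # the "check poststart hook" step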
• [SLOW TEST:16.131 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1239,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:32:50.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Apr 2 21:32:50.529: INFO: Waiting up to 5m0s for pod "pod-22cf1882-1bac-4f64-a614-d0cba134ab50" in namespace "emptydir-5643" to be "success or failure" Apr 2 21:32:50.532: INFO: Pod "pod-22cf1882-1bac-4f64-a614-d0cba134ab50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.163617ms Apr 2 21:32:52.536: INFO: Pod "pod-22cf1882-1bac-4f64-a614-d0cba134ab50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007215836s Apr 2 21:32:54.540: INFO: Pod "pod-22cf1882-1bac-4f64-a614-d0cba134ab50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01124133s STEP: Saw pod success Apr 2 21:32:54.540: INFO: Pod "pod-22cf1882-1bac-4f64-a614-d0cba134ab50" satisfied condition "success or failure" Apr 2 21:32:54.544: INFO: Trying to get logs from node jerma-worker2 pod pod-22cf1882-1bac-4f64-a614-d0cba134ab50 container test-container: STEP: delete the pod Apr 2 21:32:54.564: INFO: Waiting for pod pod-22cf1882-1bac-4f64-a614-d0cba134ab50 to disappear Apr 2 21:32:54.580: INFO: Pod pod-22cf1882-1bac-4f64-a614-d0cba134ab50 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:32:54.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5643" for this suite. 
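What "volume on default medium should have the correct mode" checks, sketched below. The kubelet creates emptyDir directories world-writable, so the expected listing is drwxrwxrwx.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # default medium (node disk); no "medium: Memory"
EOF
kubectl logs emptydir-mode-demo   # drwxrwxrwx ... /test-volume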
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1254,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:32:54.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:32:54.803: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 2 21:32:54.814: INFO: Number of nodes with available pods: 0 Apr 2 21:32:54.814: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 2 21:32:54.847: INFO: Number of nodes with available pods: 0 Apr 2 21:32:54.847: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:32:56.103: INFO: Number of nodes with available pods: 0 Apr 2 21:32:56.103: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:32:56.858: INFO: Number of nodes with available pods: 0 Apr 2 21:32:56.858: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:32:57.875: INFO: Number of nodes with available pods: 0 Apr 2 21:32:57.875: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:32:58.851: INFO: Number of nodes with available pods: 1 Apr 2 21:32:58.851: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 2 21:32:58.879: INFO: Number of nodes with available pods: 1 Apr 2 21:32:58.879: INFO: Number of running nodes: 0, number of available pods: 1 Apr 2 21:32:59.883: INFO: Number of nodes with available pods: 0 Apr 2 21:32:59.883: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 2 21:32:59.904: INFO: Number of nodes with available pods: 0 Apr 2 21:32:59.904: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:00.909: INFO: Number of nodes with available pods: 0 Apr 2 21:33:00.909: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:01.908: INFO: Number of nodes with available pods: 0 Apr 2 21:33:01.908: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:02.908: INFO: Number of nodes with available pods: 0 Apr 2 21:33:02.908: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:03.908: INFO: Number of nodes with available pods: 0 Apr 2 21:33:03.909: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:04.908: INFO: Number of nodes with available pods: 0 Apr 2 21:33:04.908: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:05.909: INFO: Number of nodes with 
available pods: 0 Apr 2 21:33:05.909: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:06.908: INFO: Number of nodes with available pods: 0 Apr 2 21:33:06.908: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:07.908: INFO: Number of nodes with available pods: 0 Apr 2 21:33:07.908: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:08.912: INFO: Number of nodes with available pods: 0 Apr 2 21:33:08.912: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:09.909: INFO: Number of nodes with available pods: 0 Apr 2 21:33:09.909: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:10.909: INFO: Number of nodes with available pods: 0 Apr 2 21:33:10.909: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:11.908: INFO: Number of nodes with available pods: 1 Apr 2 21:33:11.908: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8275, will wait for the garbage collector to delete the pods Apr 2 21:33:11.972: INFO: Deleting DaemonSet.extensions daemon-set took: 5.890279ms Apr 2 21:33:12.272: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.261908ms Apr 2 21:33:15.576: INFO: Number of nodes with available pods: 0 Apr 2 21:33:15.576: INFO: Number of running nodes: 0, number of available pods: 0 Apr 2 21:33:15.579: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8275/daemonsets","resourceVersion":"4851905"},"items":null} Apr 2 21:33:15.582: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8275/pods","resourceVersion":"4851905"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:33:15.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8275" for this suite. 
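A sketch of the node-selector dance driving this spec: no node matches at first, labelling a node blue schedules a pod, relabelling it green unschedules it. The label key "color" is illustrative; the node name is from this run's cluster.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl label node jerma-worker color=blue                 # daemon pod is launched
kubectl label node jerma-worker color=green --overwrite    # daemon pod is unscheduled again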
• [SLOW TEST:21.031 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":94,"skipped":1268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:33:15.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:33:15.717: INFO: Create a RollingUpdate DaemonSet Apr 2 21:33:15.720: INFO: Check that daemon pods launch on every node of the cluster Apr 2 21:33:15.726: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:15.742: INFO: Number of nodes with available pods: 0 Apr 2 21:33:15.742: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:16.747: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:16.750: INFO: Number of nodes with available pods: 0 Apr 2 21:33:16.750: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:17.810: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:17.815: INFO: Number of nodes with available pods: 0 Apr 2 21:33:17.815: INFO: Node jerma-worker is running more than one daemon pod Apr 2 21:33:18.749: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:18.753: INFO: Number of nodes with available pods: 1 Apr 2 21:33:18.753: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:33:19.748: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:19.751: INFO: Number of nodes with available pods: 1 Apr 2 21:33:19.751: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 21:33:20.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:20.774: INFO: Number of nodes with available pods: 2 Apr 2 21:33:20.774: INFO: Number of running 
nodes: 2, number of available pods: 2 Apr 2 21:33:20.774: INFO: Update the DaemonSet to trigger a rollout Apr 2 21:33:20.798: INFO: Updating DaemonSet daemon-set Apr 2 21:33:29.827: INFO: Roll back the DaemonSet before rollout is complete Apr 2 21:33:29.834: INFO: Updating DaemonSet daemon-set Apr 2 21:33:29.834: INFO: Make sure DaemonSet rollback is complete Apr 2 21:33:29.840: INFO: Wrong image for pod: daemon-set-wwlmd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 2 21:33:29.840: INFO: Pod daemon-set-wwlmd is not available Apr 2 21:33:29.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:30.870: INFO: Wrong image for pod: daemon-set-wwlmd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 2 21:33:30.870: INFO: Pod daemon-set-wwlmd is not available Apr 2 21:33:30.874: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:31.867: INFO: Wrong image for pod: daemon-set-wwlmd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 2 21:33:31.867: INFO: Pod daemon-set-wwlmd is not available Apr 2 21:33:31.930: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:32.868: INFO: Wrong image for pod: daemon-set-wwlmd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 2 21:33:32.868: INFO: Pod daemon-set-wwlmd is not available Apr 2 21:33:32.872: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 21:33:33.900: INFO: Pod daemon-set-b46zw is not available Apr 2 21:33:33.903: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-630, will wait for the garbage collector to delete the pods Apr 2 21:33:33.968: INFO: Deleting DaemonSet.extensions daemon-set took: 6.350471ms Apr 2 21:33:34.068: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.266405ms Apr 2 21:33:39.272: INFO: Number of nodes with available pods: 0 Apr 2 21:33:39.272: INFO: Number of running nodes: 0, number of available pods: 0 Apr 2 21:33:39.278: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-630/daemonsets","resourceVersion":"4852070"},"items":null} Apr 2 21:33:39.280: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-630/pods","resourceVersion":"4852070"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:33:39.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-630" for this suite. 
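The rollback flow, driven with kubectl against a DaemonSet like the sketch above ("app" is that sketch's container name; foo:non-existent is the broken image from the log).

kubectl set image daemonset/daemon-set app=foo:non-existent   # trigger a rollout that can never complete
kubectl rollout status daemonset/daemon-set                   # stalls on the unpullable image
kubectl rollout undo daemonset/daemon-set                     # roll back before the rollout finishes
kubectl rollout history daemonset/daemon-set

The point of the spec is that pods already running the old image are left alone by the rollback; only the broken pod (daemon-set-wwlmd above) is replaced.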
• [SLOW TEST:23.689 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":95,"skipped":1310,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:33:39.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:33:39.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 2 21:33:39.503: INFO: stderr: "" Apr 2 21:33:39.503: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:31:51Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:33:39.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1174" for this suite. 
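For completeness, the command under test and a machine-readable variant:

kubectl version            # prints the Client Version and Server Version structs captured above
kubectl version -o json    # same data as JSON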
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":96,"skipped":1323,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:33:39.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:33:40.085: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:33:42.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460020, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460020, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460020, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460020, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:33:45.130: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:33:45.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-544" for this suite. STEP: Destroying namespace "webhook-544-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.839 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":97,"skipped":1343,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:33:45.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-333.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-333.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-333.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-333.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-333.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-333.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:33:51.576: INFO: DNS probes using dns-333/dns-test-4108f15a-0b09-490c-b8eb-e38d13430ae0 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:33:51.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-333" for this suite. 
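The managed hosts file can be inspected directly; the kubelet injects the pod-IP and hostname entries that the getent probes above grep for.

kubectl run hosts-check --image=busybox --restart=Never -- cat /etc/hosts
kubectl logs hosts-check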
• [SLOW TEST:6.332 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":98,"skipped":1359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:33:51.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:33:51.830: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:33:55.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5846" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1383,"failed":0} S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:33:55.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:34:00.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7701" for this suite. 
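What "use the image defaults" amounts to: the pod spec sets no command or args, so the image's own ENTRYPOINT/CMD run (image reused from earlier in this run; the name is illustrative).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: docker.io/library/httpd:2.4.38-alpine   # no command/args: image defaults apply
EOF
kubectl get pod image-defaults-demo   # httpd starts in the foreground via its default CMD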
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1384,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:34:00.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 2 21:34:00.098: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852311 0 2020-04-02 21:34:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 2 21:34:00.098: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852311 0 2020-04-02 21:34:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 2 21:34:10.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852364 0 2020-04-02 21:34:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 2 21:34:10.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852364 0 2020-04-02 21:34:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 2 21:34:20.114: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852394 0 2020-04-02 21:34:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 2 21:34:20.114: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852394 0 2020-04-02 21:34:00 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 2 21:34:30.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852425 0 2020-04-02 21:34:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 2 21:34:30.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-a 36734e8c-e3cd-4c94-9c38-11f3b035f507 4852425 0 2020-04-02 21:34:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 2 21:34:40.129: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-b e5529102-5a0a-46de-a32f-ef199fe6e617 4852461 0 2020-04-02 21:34:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 2 21:34:40.129: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-b e5529102-5a0a-46de-a32f-ef199fe6e617 4852461 0 2020-04-02 21:34:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 2 21:34:50.136: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-b e5529102-5a0a-46de-a32f-ef199fe6e617 4852491 0 2020-04-02 21:34:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 2 21:34:50.136: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-688 /api/v1/namespaces/watch-688/configmaps/e2e-watch-test-configmap-b e5529102-5a0a-46de-a32f-ef199fe6e617 4852491 0 2020-04-02 21:34:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:00.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-688" for this suite. 
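For reference: each of the three watches above is scoped by a label selector, which is why watcher A and the A-or-B watcher both report every event on configmap A. A rough CLI equivalent of watcher A (label key and value taken from the log; the namespace from this run is long gone, so substitute your own):

# stream add/update/delete notifications for configmaps carrying label A
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch -o name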
• [SLOW TEST:60.116 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":101,"skipped":1386,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:00.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:05.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9919" for this suite. • [SLOW TEST:5.657 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":102,"skipped":1387,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:05.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:05.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4749" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":103,"skipped":1404,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:05.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 2 21:35:05.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 2 21:35:06.151: INFO: stderr: "" Apr 2 21:35:06.151: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:06.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9759" for this suite. 
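For reference: the api-versions check above amounts to confirming that the core group/version v1 appears in the list the apiserver advertises. Scripted by hand, the same assertion might look like:

# grep -x matches the whole line, so this only succeeds on the core "v1" entry
kubectl api-versions | grep -x v1 && echo "v1 is available"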
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":104,"skipped":1415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:06.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-p262 STEP: Creating a pod to test atomic-volume-subpath Apr 2 21:35:06.238: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p262" in namespace "subpath-8928" to be "success or failure" Apr 2 21:35:06.242: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03452ms Apr 2 21:35:08.246: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007983245s Apr 2 21:35:10.250: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 4.012458074s Apr 2 21:35:12.255: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 6.017047526s Apr 2 21:35:14.259: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 8.021145514s Apr 2 21:35:16.263: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 10.025082236s Apr 2 21:35:18.267: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 12.029010378s Apr 2 21:35:20.270: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 14.032114045s Apr 2 21:35:22.274: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 16.036526648s Apr 2 21:35:24.278: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 18.040253369s Apr 2 21:35:26.282: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 20.044682501s Apr 2 21:35:28.286: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Running", Reason="", readiness=true. Elapsed: 22.048525366s Apr 2 21:35:30.295: INFO: Pod "pod-subpath-test-configmap-p262": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.057557412s STEP: Saw pod success Apr 2 21:35:30.295: INFO: Pod "pod-subpath-test-configmap-p262" satisfied condition "success or failure" Apr 2 21:35:30.298: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-p262 container test-container-subpath-configmap-p262: STEP: delete the pod Apr 2 21:35:30.337: INFO: Waiting for pod pod-subpath-test-configmap-p262 to disappear Apr 2 21:35:30.346: INFO: Pod pod-subpath-test-configmap-p262 no longer exists STEP: Deleting pod pod-subpath-test-configmap-p262 Apr 2 21:35:30.346: INFO: Deleting pod "pod-subpath-test-configmap-p262" in namespace "subpath-8928" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:30.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8928" for this suite. • [SLOW TEST:24.198 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":105,"skipped":1470,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:30.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:35:31.095: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:35:33.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460131, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460131, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460131, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460131, 
loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:35:36.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 2 21:35:36.207: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:36.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3808" for this suite. STEP: Destroying namespace "webhook-3808-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.958 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":106,"skipped":1483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:36.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Apr 2 21:35:36.394: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8404" to be "success or failure" Apr 2 21:35:36.412: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.740721ms Apr 2 21:35:38.416: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02195178s Apr 2 21:35:40.420: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025700319s STEP: Saw pod success Apr 2 21:35:40.420: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 2 21:35:40.423: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 2 21:35:40.462: INFO: Waiting for pod pod-host-path-test to disappear Apr 2 21:35:40.494: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:40.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8404" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1605,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:40.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:35:40.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a" in namespace "downward-api-5363" to be "success or failure" Apr 2 21:35:40.584: INFO: Pod "downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.707177ms Apr 2 21:35:42.588: INFO: Pod "downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018047365s Apr 2 21:35:44.592: INFO: Pod "downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022455875s STEP: Saw pod success Apr 2 21:35:44.593: INFO: Pod "downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a" satisfied condition "success or failure" Apr 2 21:35:44.596: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a container client-container: STEP: delete the pod Apr 2 21:35:44.628: INFO: Waiting for pod downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a to disappear Apr 2 21:35:44.640: INFO: Pod downwardapi-volume-79b6bcd8-f1ec-4964-a2d9-583f6d5e388a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:44.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5363" for this suite. 
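For reference: the Downward API test above projects the container's own memory limit into a file via a downwardAPI volume. A minimal sketch of that pod shape (names and the 64Mi limit are illustrative, not the test's generated values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi    # with a 64Mi limit, the file contains "64"
  restartPolicy: Never
EOF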
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1609,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:44.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:44.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-934" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":109,"skipped":1610,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:44.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 2 21:35:44.817: INFO: Waiting up to 5m0s for pod "var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752" in namespace "var-expansion-604" to be "success or failure" Apr 2 21:35:44.853: INFO: Pod "var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752": Phase="Pending", Reason="", readiness=false. Elapsed: 36.693877ms Apr 2 21:35:46.856: INFO: Pod "var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039683448s Apr 2 21:35:48.860: INFO: Pod "var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042984766s STEP: Saw pod success Apr 2 21:35:48.860: INFO: Pod "var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752" satisfied condition "success or failure" Apr 2 21:35:48.862: INFO: Trying to get logs from node jerma-worker pod var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752 container dapi-container: STEP: delete the pod Apr 2 21:35:48.885: INFO: Waiting for pod var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752 to disappear Apr 2 21:35:48.919: INFO: Pod var-expansion-9a62dac9-aeb4-425e-91f4-2a51a5d85752 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:48.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-604" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:48.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c7ae09ed-7eed-431b-956c-9e144576c98f STEP: Creating a pod to test consume secrets Apr 2 21:35:49.005: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12" in namespace "projected-2742" to be "success or failure" Apr 2 21:35:49.057: INFO: Pod "pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12": Phase="Pending", Reason="", readiness=false. Elapsed: 52.141731ms Apr 2 21:35:51.061: INFO: Pod "pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056226573s Apr 2 21:35:53.065: INFO: Pod "pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05985008s STEP: Saw pod success Apr 2 21:35:53.065: INFO: Pod "pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12" satisfied condition "success or failure" Apr 2 21:35:53.067: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12 container projected-secret-volume-test: STEP: delete the pod Apr 2 21:35:53.096: INFO: Waiting for pod pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12 to disappear Apr 2 21:35:53.118: INFO: Pod pod-projected-secrets-0425e467-7900-4b01-8f4a-15a4ad8c5b12 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:35:53.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2742" for this suite. 
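For reference: the projected-secret test above asserts that defaultMode is applied to the file projected from the secret. A sketch of the shape it exercises (secret name, key, and mode are illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0400    # expect -r-------- on the projected file
      sources:
      - secret:
          name: demo-secret
  restartPolicy: Never
EOF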
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1646,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:35:53.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3332.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3332.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:35:59.237: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.246: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.249: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.252: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.261: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.263: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.291: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.294: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:35:59.300: INFO: Lookups using dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local] Apr 2 21:36:04.306: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods 
dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.310: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.314: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.317: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.335: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.339: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.360: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.363: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:04.368: INFO: Lookups using dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local] Apr 2 21:36:09.305: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.308: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.311: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.314: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod 
dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.323: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.326: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.328: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.358: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:09.364: INFO: Lookups using dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local] Apr 2 21:36:14.305: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.309: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.313: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.316: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.326: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.330: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods 
dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.333: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.336: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:14.342: INFO: Lookups using dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local] Apr 2 21:36:19.328: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.335: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.338: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.342: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.348: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.350: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.351: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.353: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:19.358: INFO: Lookups using dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local] Apr 2 21:36:24.305: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.309: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.313: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.316: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.326: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.329: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.332: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.336: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local from pod dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9: the server could not find the requested resource (get pods dns-test-56622835-6dac-4501-b877-36f482390de9) Apr 2 21:36:24.343: INFO: Lookups using dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3332.svc.cluster.local jessie_udp@dns-test-service-2.dns-3332.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3332.svc.cluster.local] Apr 2 21:36:29.339: INFO: DNS probes using dns-3332/dns-test-56622835-6dac-4501-b877-36f482390de9 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:36:29.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3332" for this suite. • [SLOW TEST:36.585 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":112,"skipped":1649,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:36:29.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 2 21:36:29.851: INFO: Waiting up to 5m0s for pod "var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b" in namespace "var-expansion-387" to be "success or failure" Apr 2 21:36:29.867: INFO: Pod "var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.84193ms Apr 2 21:36:31.871: INFO: Pod "var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019812186s Apr 2 21:36:33.896: INFO: Pod "var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044509865s STEP: Saw pod success Apr 2 21:36:33.896: INFO: Pod "var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b" satisfied condition "success or failure" Apr 2 21:36:33.898: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b container dapi-container: STEP: delete the pod Apr 2 21:36:33.916: INFO: Waiting for pod var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b to disappear Apr 2 21:36:33.921: INFO: Pod var-expansion-6c6068cf-5282-452e-8c0f-60bfedcd832b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:36:33.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-387" for this suite. 
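For reference: the Variable Expansion test above relies on the $(VAR) substitution the kubelet performs on command and args, with no shell involved. A minimal pod demonstrating it (names and the message are illustrative; the quoted heredoc keeps the local shell from expanding $(MESSAGE) itself):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]   # expanded by the kubelet against the env block
  restartPolicy: Never
EOF

# once the pod has run, `kubectl logs var-expansion-demo` should print the message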
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:36:33.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 2 21:36:34.033: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5842 /api/v1/namespaces/watch-5842/configmaps/e2e-watch-test-label-changed 5a15bca4-bd94-4cf3-bed7-ba3205603328 4853192 0 2020-04-02 21:36:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 2 21:36:34.033: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5842 /api/v1/namespaces/watch-5842/configmaps/e2e-watch-test-label-changed 5a15bca4-bd94-4cf3-bed7-ba3205603328 4853193 0 2020-04-02 21:36:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 2 21:36:34.033: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5842 /api/v1/namespaces/watch-5842/configmaps/e2e-watch-test-label-changed 5a15bca4-bd94-4cf3-bed7-ba3205603328 4853194 0 2020-04-02 21:36:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 2 21:36:44.063: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5842 /api/v1/namespaces/watch-5842/configmaps/e2e-watch-test-label-changed 5a15bca4-bd94-4cf3-bed7-ba3205603328 4853251 0 2020-04-02 21:36:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 2 21:36:44.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5842 /api/v1/namespaces/watch-5842/configmaps/e2e-watch-test-label-changed 5a15bca4-bd94-4cf3-bed7-ba3205603328 4853252 0 2020-04-02 21:36:34 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 2 21:36:44.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5842 /api/v1/namespaces/watch-5842/configmaps/e2e-watch-test-label-changed 5a15bca4-bd94-4cf3-bed7-ba3205603328 4853253 0 2020-04-02 21:36:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:36:44.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5842" for this suite. • [SLOW TEST:10.142 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":114,"skipped":1730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:36:44.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 2 21:36:44.128: INFO: Waiting up to 5m0s for pod "pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92" in namespace "emptydir-341" to be "success or failure" Apr 2 21:36:44.132: INFO: Pod "pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125472ms Apr 2 21:36:46.136: INFO: Pod "pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008046742s Apr 2 21:36:48.141: INFO: Pod "pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012192139s STEP: Saw pod success Apr 2 21:36:48.141: INFO: Pod "pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92" satisfied condition "success or failure" Apr 2 21:36:48.144: INFO: Trying to get logs from node jerma-worker2 pod pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92 container test-container: STEP: delete the pod Apr 2 21:36:48.163: INFO: Waiting for pod pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92 to disappear Apr 2 21:36:48.180: INFO: Pod pod-fdf00c8c-d13d-4921-b96e-56c0b1ac8f92 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:36:48.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-341" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1760,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:36:48.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5344/configmap-test-e093bb39-05f7-4fc5-95ba-1b4869769705 STEP: Creating a pod to test consume configMaps Apr 2 21:36:48.272: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a" in namespace "configmap-5344" to be "success or failure" Apr 2 21:36:48.276: INFO: Pod "pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74865ms Apr 2 21:36:50.281: INFO: Pod "pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008730747s Apr 2 21:36:52.285: INFO: Pod "pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012734953s STEP: Saw pod success Apr 2 21:36:52.285: INFO: Pod "pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a" satisfied condition "success or failure" Apr 2 21:36:52.288: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a container env-test: STEP: delete the pod Apr 2 21:36:52.305: INFO: Waiting for pod pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a to disappear Apr 2 21:36:52.330: INFO: Pod pod-configmaps-c8ae1644-b4f0-4dbb-9fab-c11068f8615a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:36:52.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5344" for this suite. 
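------------------------------
[Editor's note] The ConfigMap test above ("should be consumable via environment variable") creates a ConfigMap, then a pod whose container reads one of its keys through an env var, and waits for the pod to reach "success or failure". A minimal client-go sketch of the same pattern follows; all names (demo-config, env-test, the default namespace, the busybox image) are illustrative, and the context-taking Create signatures assume client-go v0.18 or newer, slightly newer than the v1.17 tree this run was built from.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e framework uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // illustrative; the e2e run uses a generated namespace

	// ConfigMap holding the value the container will read from its environment.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Pod whose env var CONFIG_DATA_1 is sourced from the ConfigMap key.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The test then fetches the container log (the "Trying to get logs from node ... container env-test" step above) and checks that the variable appears there.
------------------------------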
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1773,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:36:52.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 21:36:52.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-907' Apr 2 21:36:52.519: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 2 21:36:52.519: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Apr 2 21:36:52.568: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-ghmtz] Apr 2 21:36:52.568: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-ghmtz" in namespace "kubectl-907" to be "running and ready" Apr 2 21:36:52.570: INFO: Pod "e2e-test-httpd-rc-ghmtz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177111ms Apr 2 21:36:54.574: INFO: Pod "e2e-test-httpd-rc-ghmtz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006392751s Apr 2 21:36:56.578: INFO: Pod "e2e-test-httpd-rc-ghmtz": Phase="Running", Reason="", readiness=true. Elapsed: 4.010346878s Apr 2 21:36:56.578: INFO: Pod "e2e-test-httpd-rc-ghmtz" satisfied condition "running and ready" Apr 2 21:36:56.579: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-ghmtz] Apr 2 21:36:56.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-907' Apr 2 21:36:56.704: INFO: stderr: "" Apr 2 21:36:56.704: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.148. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.148. 
Set the 'ServerName' directive globally to suppress this message\n[Thu Apr 02 21:36:54.747990 2020] [mpm_event:notice] [pid 1:tid 140567938354024] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Apr 02 21:36:54.748045 2020] [core:notice] [pid 1:tid 140567938354024] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Apr 2 21:36:56.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-907' Apr 2 21:36:56.878: INFO: stderr: "" Apr 2 21:36:56.878: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:36:56.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-907" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":117,"skipped":1778,"failed":0} SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:36:56.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 2 21:37:07.074: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.074: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.116105 6 log.go:172] (0xc003e74c60) (0xc001c2ff40) Create stream I0402 21:37:07.116132 6 log.go:172] (0xc003e74c60) (0xc001c2ff40) Stream added, broadcasting: 1 I0402 21:37:07.117840 6 log.go:172] (0xc003e74c60) Reply frame received for 1 I0402 21:37:07.117871 6 log.go:172] (0xc003e74c60) (0xc00137a000) Create stream I0402 21:37:07.117892 6 log.go:172] (0xc003e74c60) (0xc00137a000) Stream added, broadcasting: 3 I0402 21:37:07.118810 6 log.go:172] (0xc003e74c60) Reply frame received for 3 I0402 21:37:07.118853 6 log.go:172] (0xc003e74c60) (0xc000d120a0) Create stream I0402 21:37:07.118869 6 log.go:172] (0xc003e74c60) (0xc000d120a0) Stream added, broadcasting: 5 I0402 21:37:07.119739 6 log.go:172] (0xc003e74c60) Reply frame received for 5 I0402 21:37:07.225658 6 log.go:172] (0xc003e74c60) Data frame received for 3 I0402 21:37:07.225702 6 log.go:172] (0xc00137a000) (3) Data frame handling I0402 21:37:07.225721 6 log.go:172] (0xc00137a000) (3) Data frame sent I0402 21:37:07.225734 6 log.go:172] (0xc003e74c60) Data frame received for 3 
I0402 21:37:07.225753 6 log.go:172] (0xc00137a000) (3) Data frame handling I0402 21:37:07.225786 6 log.go:172] (0xc003e74c60) Data frame received for 5 I0402 21:37:07.225816 6 log.go:172] (0xc000d120a0) (5) Data frame handling I0402 21:37:07.226761 6 log.go:172] (0xc003e74c60) Data frame received for 1 I0402 21:37:07.226784 6 log.go:172] (0xc001c2ff40) (1) Data frame handling I0402 21:37:07.226801 6 log.go:172] (0xc001c2ff40) (1) Data frame sent I0402 21:37:07.226972 6 log.go:172] (0xc003e74c60) (0xc001c2ff40) Stream removed, broadcasting: 1 I0402 21:37:07.227001 6 log.go:172] (0xc003e74c60) Go away received I0402 21:37:07.227068 6 log.go:172] (0xc003e74c60) (0xc001c2ff40) Stream removed, broadcasting: 1 I0402 21:37:07.227100 6 log.go:172] (0xc003e74c60) (0xc00137a000) Stream removed, broadcasting: 3 I0402 21:37:07.227114 6 log.go:172] (0xc003e74c60) (0xc000d120a0) Stream removed, broadcasting: 5 Apr 2 21:37:07.227: INFO: Exec stderr: "" Apr 2 21:37:07.227: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.227: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.255134 6 log.go:172] (0xc003e75290) (0xc000d12640) Create stream I0402 21:37:07.255165 6 log.go:172] (0xc003e75290) (0xc000d12640) Stream added, broadcasting: 1 I0402 21:37:07.261611 6 log.go:172] (0xc003e75290) Reply frame received for 1 I0402 21:37:07.261654 6 log.go:172] (0xc003e75290) (0xc00137a280) Create stream I0402 21:37:07.261666 6 log.go:172] (0xc003e75290) (0xc00137a280) Stream added, broadcasting: 3 I0402 21:37:07.262729 6 log.go:172] (0xc003e75290) Reply frame received for 3 I0402 21:37:07.262758 6 log.go:172] (0xc003e75290) (0xc000d126e0) Create stream I0402 21:37:07.262780 6 log.go:172] (0xc003e75290) (0xc000d126e0) Stream added, broadcasting: 5 I0402 21:37:07.264640 6 log.go:172] (0xc003e75290) Reply frame received for 5 I0402 21:37:07.312360 6 log.go:172] (0xc003e75290) Data frame received for 3 I0402 21:37:07.312403 6 log.go:172] (0xc00137a280) (3) Data frame handling I0402 21:37:07.312417 6 log.go:172] (0xc00137a280) (3) Data frame sent I0402 21:37:07.312425 6 log.go:172] (0xc003e75290) Data frame received for 3 I0402 21:37:07.312449 6 log.go:172] (0xc00137a280) (3) Data frame handling I0402 21:37:07.312493 6 log.go:172] (0xc003e75290) Data frame received for 5 I0402 21:37:07.312513 6 log.go:172] (0xc000d126e0) (5) Data frame handling I0402 21:37:07.314120 6 log.go:172] (0xc003e75290) Data frame received for 1 I0402 21:37:07.314144 6 log.go:172] (0xc000d12640) (1) Data frame handling I0402 21:37:07.314157 6 log.go:172] (0xc000d12640) (1) Data frame sent I0402 21:37:07.314168 6 log.go:172] (0xc003e75290) (0xc000d12640) Stream removed, broadcasting: 1 I0402 21:37:07.314270 6 log.go:172] (0xc003e75290) (0xc000d12640) Stream removed, broadcasting: 1 I0402 21:37:07.314285 6 log.go:172] (0xc003e75290) (0xc00137a280) Stream removed, broadcasting: 3 I0402 21:37:07.314296 6 log.go:172] (0xc003e75290) (0xc000d126e0) Stream removed, broadcasting: 5 Apr 2 21:37:07.314: INFO: Exec stderr: "" Apr 2 21:37:07.314: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.314: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.314379 6 log.go:172] (0xc003e75290) Go away received I0402 21:37:07.344841 6 
log.go:172] (0xc001bda4d0) (0xc000e9e500) Create stream I0402 21:37:07.344876 6 log.go:172] (0xc001bda4d0) (0xc000e9e500) Stream added, broadcasting: 1 I0402 21:37:07.346793 6 log.go:172] (0xc001bda4d0) Reply frame received for 1 I0402 21:37:07.346861 6 log.go:172] (0xc001bda4d0) (0xc000e9e5a0) Create stream I0402 21:37:07.346884 6 log.go:172] (0xc001bda4d0) (0xc000e9e5a0) Stream added, broadcasting: 3 I0402 21:37:07.347699 6 log.go:172] (0xc001bda4d0) Reply frame received for 3 I0402 21:37:07.347727 6 log.go:172] (0xc001bda4d0) (0xc000e9e780) Create stream I0402 21:37:07.347736 6 log.go:172] (0xc001bda4d0) (0xc000e9e780) Stream added, broadcasting: 5 I0402 21:37:07.348575 6 log.go:172] (0xc001bda4d0) Reply frame received for 5 I0402 21:37:07.427346 6 log.go:172] (0xc001bda4d0) Data frame received for 5 I0402 21:37:07.427378 6 log.go:172] (0xc000e9e780) (5) Data frame handling I0402 21:37:07.427453 6 log.go:172] (0xc001bda4d0) Data frame received for 3 I0402 21:37:07.427504 6 log.go:172] (0xc000e9e5a0) (3) Data frame handling I0402 21:37:07.427535 6 log.go:172] (0xc000e9e5a0) (3) Data frame sent I0402 21:37:07.427549 6 log.go:172] (0xc001bda4d0) Data frame received for 3 I0402 21:37:07.427560 6 log.go:172] (0xc000e9e5a0) (3) Data frame handling I0402 21:37:07.429284 6 log.go:172] (0xc001bda4d0) Data frame received for 1 I0402 21:37:07.429308 6 log.go:172] (0xc000e9e500) (1) Data frame handling I0402 21:37:07.429321 6 log.go:172] (0xc000e9e500) (1) Data frame sent I0402 21:37:07.429336 6 log.go:172] (0xc001bda4d0) (0xc000e9e500) Stream removed, broadcasting: 1 I0402 21:37:07.429371 6 log.go:172] (0xc001bda4d0) Go away received I0402 21:37:07.429495 6 log.go:172] (0xc001bda4d0) (0xc000e9e500) Stream removed, broadcasting: 1 I0402 21:37:07.429514 6 log.go:172] (0xc001bda4d0) (0xc000e9e5a0) Stream removed, broadcasting: 3 I0402 21:37:07.429522 6 log.go:172] (0xc001bda4d0) (0xc000e9e780) Stream removed, broadcasting: 5 Apr 2 21:37:07.429: INFO: Exec stderr: "" Apr 2 21:37:07.429: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.429: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.455362 6 log.go:172] (0xc005c828f0) (0xc00167d040) Create stream I0402 21:37:07.455387 6 log.go:172] (0xc005c828f0) (0xc00167d040) Stream added, broadcasting: 1 I0402 21:37:07.456953 6 log.go:172] (0xc005c828f0) Reply frame received for 1 I0402 21:37:07.457005 6 log.go:172] (0xc005c828f0) (0xc000d12780) Create stream I0402 21:37:07.457028 6 log.go:172] (0xc005c828f0) (0xc000d12780) Stream added, broadcasting: 3 I0402 21:37:07.458228 6 log.go:172] (0xc005c828f0) Reply frame received for 3 I0402 21:37:07.458285 6 log.go:172] (0xc005c828f0) (0xc00167d180) Create stream I0402 21:37:07.458300 6 log.go:172] (0xc005c828f0) (0xc00167d180) Stream added, broadcasting: 5 I0402 21:37:07.459250 6 log.go:172] (0xc005c828f0) Reply frame received for 5 I0402 21:37:07.521832 6 log.go:172] (0xc005c828f0) Data frame received for 3 I0402 21:37:07.521869 6 log.go:172] (0xc000d12780) (3) Data frame handling I0402 21:37:07.521881 6 log.go:172] (0xc000d12780) (3) Data frame sent I0402 21:37:07.521894 6 log.go:172] (0xc005c828f0) Data frame received for 3 I0402 21:37:07.521904 6 log.go:172] (0xc000d12780) (3) Data frame handling I0402 21:37:07.521958 6 log.go:172] (0xc005c828f0) Data frame received for 5 I0402 21:37:07.521983 6 log.go:172] (0xc00167d180) (5) Data 
frame handling I0402 21:37:07.523492 6 log.go:172] (0xc005c828f0) Data frame received for 1 I0402 21:37:07.523535 6 log.go:172] (0xc00167d040) (1) Data frame handling I0402 21:37:07.523564 6 log.go:172] (0xc00167d040) (1) Data frame sent I0402 21:37:07.523600 6 log.go:172] (0xc005c828f0) (0xc00167d040) Stream removed, broadcasting: 1 I0402 21:37:07.523632 6 log.go:172] (0xc005c828f0) Go away received I0402 21:37:07.523766 6 log.go:172] (0xc005c828f0) (0xc00167d040) Stream removed, broadcasting: 1 I0402 21:37:07.523799 6 log.go:172] (0xc005c828f0) (0xc000d12780) Stream removed, broadcasting: 3 I0402 21:37:07.523812 6 log.go:172] (0xc005c828f0) (0xc00167d180) Stream removed, broadcasting: 5 Apr 2 21:37:07.523: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 2 21:37:07.523: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.523: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.555887 6 log.go:172] (0xc003e758c0) (0xc000d12a00) Create stream I0402 21:37:07.555911 6 log.go:172] (0xc003e758c0) (0xc000d12a00) Stream added, broadcasting: 1 I0402 21:37:07.557824 6 log.go:172] (0xc003e758c0) Reply frame received for 1 I0402 21:37:07.557884 6 log.go:172] (0xc003e758c0) (0xc00137a780) Create stream I0402 21:37:07.557897 6 log.go:172] (0xc003e758c0) (0xc00137a780) Stream added, broadcasting: 3 I0402 21:37:07.559026 6 log.go:172] (0xc003e758c0) Reply frame received for 3 I0402 21:37:07.559076 6 log.go:172] (0xc003e758c0) (0xc00167d360) Create stream I0402 21:37:07.559093 6 log.go:172] (0xc003e758c0) (0xc00167d360) Stream added, broadcasting: 5 I0402 21:37:07.560090 6 log.go:172] (0xc003e758c0) Reply frame received for 5 I0402 21:37:07.610937 6 log.go:172] (0xc003e758c0) Data frame received for 5 I0402 21:37:07.610975 6 log.go:172] (0xc00167d360) (5) Data frame handling I0402 21:37:07.611012 6 log.go:172] (0xc003e758c0) Data frame received for 3 I0402 21:37:07.611034 6 log.go:172] (0xc00137a780) (3) Data frame handling I0402 21:37:07.611067 6 log.go:172] (0xc00137a780) (3) Data frame sent I0402 21:37:07.611095 6 log.go:172] (0xc003e758c0) Data frame received for 3 I0402 21:37:07.611116 6 log.go:172] (0xc00137a780) (3) Data frame handling I0402 21:37:07.612762 6 log.go:172] (0xc003e758c0) Data frame received for 1 I0402 21:37:07.612795 6 log.go:172] (0xc000d12a00) (1) Data frame handling I0402 21:37:07.612824 6 log.go:172] (0xc000d12a00) (1) Data frame sent I0402 21:37:07.612880 6 log.go:172] (0xc003e758c0) (0xc000d12a00) Stream removed, broadcasting: 1 I0402 21:37:07.612905 6 log.go:172] (0xc003e758c0) Go away received I0402 21:37:07.613059 6 log.go:172] (0xc003e758c0) (0xc000d12a00) Stream removed, broadcasting: 1 I0402 21:37:07.613100 6 log.go:172] (0xc003e758c0) (0xc00137a780) Stream removed, broadcasting: 3 I0402 21:37:07.613297 6 log.go:172] (0xc003e758c0) (0xc00167d360) Stream removed, broadcasting: 5 Apr 2 21:37:07.613: INFO: Exec stderr: "" Apr 2 21:37:07.613: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.613: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.650668 6 log.go:172] (0xc001bdae70) (0xc000e9ed20) Create stream I0402 21:37:07.650695 6 log.go:172] 
(0xc001bdae70) (0xc000e9ed20) Stream added, broadcasting: 1 I0402 21:37:07.652722 6 log.go:172] (0xc001bdae70) Reply frame received for 1 I0402 21:37:07.652772 6 log.go:172] (0xc001bdae70) (0xc000e9ee60) Create stream I0402 21:37:07.652788 6 log.go:172] (0xc001bdae70) (0xc000e9ee60) Stream added, broadcasting: 3 I0402 21:37:07.654031 6 log.go:172] (0xc001bdae70) Reply frame received for 3 I0402 21:37:07.654081 6 log.go:172] (0xc001bdae70) (0xc000d12c80) Create stream I0402 21:37:07.654120 6 log.go:172] (0xc001bdae70) (0xc000d12c80) Stream added, broadcasting: 5 I0402 21:37:07.655212 6 log.go:172] (0xc001bdae70) Reply frame received for 5 I0402 21:37:07.714719 6 log.go:172] (0xc001bdae70) Data frame received for 3 I0402 21:37:07.714751 6 log.go:172] (0xc000e9ee60) (3) Data frame handling I0402 21:37:07.714778 6 log.go:172] (0xc001bdae70) Data frame received for 5 I0402 21:37:07.714812 6 log.go:172] (0xc000d12c80) (5) Data frame handling I0402 21:37:07.714840 6 log.go:172] (0xc000e9ee60) (3) Data frame sent I0402 21:37:07.714922 6 log.go:172] (0xc001bdae70) Data frame received for 3 I0402 21:37:07.714947 6 log.go:172] (0xc000e9ee60) (3) Data frame handling I0402 21:37:07.716364 6 log.go:172] (0xc001bdae70) Data frame received for 1 I0402 21:37:07.716394 6 log.go:172] (0xc000e9ed20) (1) Data frame handling I0402 21:37:07.716407 6 log.go:172] (0xc000e9ed20) (1) Data frame sent I0402 21:37:07.716441 6 log.go:172] (0xc001bdae70) (0xc000e9ed20) Stream removed, broadcasting: 1 I0402 21:37:07.716458 6 log.go:172] (0xc001bdae70) Go away received I0402 21:37:07.716580 6 log.go:172] (0xc001bdae70) (0xc000e9ed20) Stream removed, broadcasting: 1 I0402 21:37:07.716608 6 log.go:172] (0xc001bdae70) (0xc000e9ee60) Stream removed, broadcasting: 3 I0402 21:37:07.716625 6 log.go:172] (0xc001bdae70) (0xc000d12c80) Stream removed, broadcasting: 5 Apr 2 21:37:07.716: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 2 21:37:07.716: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.716: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.748223 6 log.go:172] (0xc002354790) (0xc00137adc0) Create stream I0402 21:37:07.748257 6 log.go:172] (0xc002354790) (0xc00137adc0) Stream added, broadcasting: 1 I0402 21:37:07.752592 6 log.go:172] (0xc002354790) Reply frame received for 1 I0402 21:37:07.752644 6 log.go:172] (0xc002354790) (0xc000d76000) Create stream I0402 21:37:07.752682 6 log.go:172] (0xc002354790) (0xc000d76000) Stream added, broadcasting: 3 I0402 21:37:07.756811 6 log.go:172] (0xc002354790) Reply frame received for 3 I0402 21:37:07.756926 6 log.go:172] (0xc002354790) (0xc000958140) Create stream I0402 21:37:07.756964 6 log.go:172] (0xc002354790) (0xc000958140) Stream added, broadcasting: 5 I0402 21:37:07.758491 6 log.go:172] (0xc002354790) Reply frame received for 5 I0402 21:37:07.825944 6 log.go:172] (0xc002354790) Data frame received for 3 I0402 21:37:07.825989 6 log.go:172] (0xc000d76000) (3) Data frame handling I0402 21:37:07.826007 6 log.go:172] (0xc000d76000) (3) Data frame sent I0402 21:37:07.826041 6 log.go:172] (0xc002354790) Data frame received for 5 I0402 21:37:07.826110 6 log.go:172] (0xc000958140) (5) Data frame handling I0402 21:37:07.826157 6 log.go:172] (0xc002354790) Data frame received for 3 I0402 21:37:07.826183 6 log.go:172] (0xc000d76000) 
(3) Data frame handling I0402 21:37:07.827565 6 log.go:172] (0xc002354790) Data frame received for 1 I0402 21:37:07.827582 6 log.go:172] (0xc00137adc0) (1) Data frame handling I0402 21:37:07.827606 6 log.go:172] (0xc00137adc0) (1) Data frame sent I0402 21:37:07.827619 6 log.go:172] (0xc002354790) (0xc00137adc0) Stream removed, broadcasting: 1 I0402 21:37:07.827693 6 log.go:172] (0xc002354790) Go away received I0402 21:37:07.827752 6 log.go:172] (0xc002354790) (0xc00137adc0) Stream removed, broadcasting: 1 I0402 21:37:07.827790 6 log.go:172] (0xc002354790) (0xc000d76000) Stream removed, broadcasting: 3 I0402 21:37:07.827807 6 log.go:172] (0xc002354790) (0xc000958140) Stream removed, broadcasting: 5 Apr 2 21:37:07.827: INFO: Exec stderr: "" Apr 2 21:37:07.827: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.827: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.860830 6 log.go:172] (0xc002141ef0) (0xc001ea4140) Create stream I0402 21:37:07.860855 6 log.go:172] (0xc002141ef0) (0xc001ea4140) Stream added, broadcasting: 1 I0402 21:37:07.862821 6 log.go:172] (0xc002141ef0) Reply frame received for 1 I0402 21:37:07.862863 6 log.go:172] (0xc002141ef0) (0xc000e52280) Create stream I0402 21:37:07.862880 6 log.go:172] (0xc002141ef0) (0xc000e52280) Stream added, broadcasting: 3 I0402 21:37:07.863755 6 log.go:172] (0xc002141ef0) Reply frame received for 3 I0402 21:37:07.863789 6 log.go:172] (0xc002141ef0) (0xc000e52320) Create stream I0402 21:37:07.863802 6 log.go:172] (0xc002141ef0) (0xc000e52320) Stream added, broadcasting: 5 I0402 21:37:07.864791 6 log.go:172] (0xc002141ef0) Reply frame received for 5 I0402 21:37:07.927533 6 log.go:172] (0xc002141ef0) Data frame received for 5 I0402 21:37:07.927574 6 log.go:172] (0xc000e52320) (5) Data frame handling I0402 21:37:07.927601 6 log.go:172] (0xc002141ef0) Data frame received for 3 I0402 21:37:07.927617 6 log.go:172] (0xc000e52280) (3) Data frame handling I0402 21:37:07.927638 6 log.go:172] (0xc000e52280) (3) Data frame sent I0402 21:37:07.927657 6 log.go:172] (0xc002141ef0) Data frame received for 3 I0402 21:37:07.927673 6 log.go:172] (0xc000e52280) (3) Data frame handling I0402 21:37:07.928853 6 log.go:172] (0xc002141ef0) Data frame received for 1 I0402 21:37:07.928875 6 log.go:172] (0xc001ea4140) (1) Data frame handling I0402 21:37:07.928895 6 log.go:172] (0xc001ea4140) (1) Data frame sent I0402 21:37:07.928916 6 log.go:172] (0xc002141ef0) (0xc001ea4140) Stream removed, broadcasting: 1 I0402 21:37:07.928932 6 log.go:172] (0xc002141ef0) Go away received I0402 21:37:07.929056 6 log.go:172] (0xc002141ef0) (0xc001ea4140) Stream removed, broadcasting: 1 I0402 21:37:07.929075 6 log.go:172] (0xc002141ef0) (0xc000e52280) Stream removed, broadcasting: 3 I0402 21:37:07.929083 6 log.go:172] (0xc002141ef0) (0xc000e52320) Stream removed, broadcasting: 5 Apr 2 21:37:07.929: INFO: Exec stderr: "" Apr 2 21:37:07.929: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:07.929: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:07.956790 6 log.go:172] (0xc002354630) (0xc001ea4780) Create stream I0402 21:37:07.956824 6 log.go:172] (0xc002354630) (0xc001ea4780) Stream added, broadcasting: 1 I0402 21:37:07.958748 6 
log.go:172] (0xc002354630) Reply frame received for 1 I0402 21:37:07.958801 6 log.go:172] (0xc002354630) (0xc001c2e140) Create stream I0402 21:37:07.958818 6 log.go:172] (0xc002354630) (0xc001c2e140) Stream added, broadcasting: 3 I0402 21:37:07.959762 6 log.go:172] (0xc002354630) Reply frame received for 3 I0402 21:37:07.959823 6 log.go:172] (0xc002354630) (0xc0016ee000) Create stream I0402 21:37:07.959852 6 log.go:172] (0xc002354630) (0xc0016ee000) Stream added, broadcasting: 5 I0402 21:37:07.960876 6 log.go:172] (0xc002354630) Reply frame received for 5 I0402 21:37:08.008108 6 log.go:172] (0xc002354630) Data frame received for 5 I0402 21:37:08.008150 6 log.go:172] (0xc0016ee000) (5) Data frame handling I0402 21:37:08.008172 6 log.go:172] (0xc002354630) Data frame received for 3 I0402 21:37:08.008189 6 log.go:172] (0xc001c2e140) (3) Data frame handling I0402 21:37:08.008210 6 log.go:172] (0xc001c2e140) (3) Data frame sent I0402 21:37:08.008223 6 log.go:172] (0xc002354630) Data frame received for 3 I0402 21:37:08.008230 6 log.go:172] (0xc001c2e140) (3) Data frame handling I0402 21:37:08.009619 6 log.go:172] (0xc002354630) Data frame received for 1 I0402 21:37:08.009655 6 log.go:172] (0xc001ea4780) (1) Data frame handling I0402 21:37:08.009678 6 log.go:172] (0xc001ea4780) (1) Data frame sent I0402 21:37:08.009701 6 log.go:172] (0xc002354630) (0xc001ea4780) Stream removed, broadcasting: 1 I0402 21:37:08.009722 6 log.go:172] (0xc002354630) Go away received I0402 21:37:08.009865 6 log.go:172] (0xc002354630) (0xc001ea4780) Stream removed, broadcasting: 1 I0402 21:37:08.009889 6 log.go:172] (0xc002354630) (0xc001c2e140) Stream removed, broadcasting: 3 I0402 21:37:08.009908 6 log.go:172] (0xc002354630) (0xc0016ee000) Stream removed, broadcasting: 5 Apr 2 21:37:08.009: INFO: Exec stderr: "" Apr 2 21:37:08.009: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:37:08.010: INFO: >>> kubeConfig: /root/.kube/config I0402 21:37:08.041106 6 log.go:172] (0xc001a84420) (0xc000e52960) Create stream I0402 21:37:08.041304 6 log.go:172] (0xc001a84420) (0xc000e52960) Stream added, broadcasting: 1 I0402 21:37:08.043807 6 log.go:172] (0xc001a84420) Reply frame received for 1 I0402 21:37:08.043850 6 log.go:172] (0xc001a84420) (0xc0016ee0a0) Create stream I0402 21:37:08.043866 6 log.go:172] (0xc001a84420) (0xc0016ee0a0) Stream added, broadcasting: 3 I0402 21:37:08.045040 6 log.go:172] (0xc001a84420) Reply frame received for 3 I0402 21:37:08.045084 6 log.go:172] (0xc001a84420) (0xc001c2e6e0) Create stream I0402 21:37:08.045098 6 log.go:172] (0xc001a84420) (0xc001c2e6e0) Stream added, broadcasting: 5 I0402 21:37:08.046283 6 log.go:172] (0xc001a84420) Reply frame received for 5 I0402 21:37:08.107839 6 log.go:172] (0xc001a84420) Data frame received for 5 I0402 21:37:08.107872 6 log.go:172] (0xc001c2e6e0) (5) Data frame handling I0402 21:37:08.107901 6 log.go:172] (0xc001a84420) Data frame received for 3 I0402 21:37:08.107915 6 log.go:172] (0xc0016ee0a0) (3) Data frame handling I0402 21:37:08.107934 6 log.go:172] (0xc0016ee0a0) (3) Data frame sent I0402 21:37:08.107950 6 log.go:172] (0xc001a84420) Data frame received for 3 I0402 21:37:08.107960 6 log.go:172] (0xc0016ee0a0) (3) Data frame handling I0402 21:37:08.109640 6 log.go:172] (0xc001a84420) Data frame received for 1 I0402 21:37:08.109696 6 log.go:172] (0xc000e52960) (1) Data frame handling 
I0402 21:37:08.109719 6 log.go:172] (0xc000e52960) (1) Data frame sent I0402 21:37:08.109777 6 log.go:172] (0xc001a84420) (0xc000e52960) Stream removed, broadcasting: 1 I0402 21:37:08.109817 6 log.go:172] (0xc001a84420) Go away received I0402 21:37:08.109904 6 log.go:172] (0xc001a84420) (0xc000e52960) Stream removed, broadcasting: 1 I0402 21:37:08.109929 6 log.go:172] (0xc001a84420) (0xc0016ee0a0) Stream removed, broadcasting: 3 I0402 21:37:08.109940 6 log.go:172] (0xc001a84420) (0xc001c2e6e0) Stream removed, broadcasting: 5 Apr 2 21:37:08.109: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:08.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6026" for this suite. • [SLOW TEST:11.219 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1785,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:08.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 2 21:37:08.166: INFO: Waiting up to 5m0s for pod "downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8" in namespace "downward-api-8395" to be "success or failure" Apr 2 21:37:08.169: INFO: Pod "downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.591277ms Apr 2 21:37:10.178: INFO: Pod "downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012354498s Apr 2 21:37:12.182: INFO: Pod "downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016916481s STEP: Saw pod success Apr 2 21:37:12.183: INFO: Pod "downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8" satisfied condition "success or failure" Apr 2 21:37:12.186: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8 container dapi-container: STEP: delete the pod Apr 2 21:37:12.207: INFO: Waiting for pod downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8 to disappear Apr 2 21:37:12.232: INFO: Pod downward-api-3c09444e-f9fa-4fc6-a92f-cc545a57a6a8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:12.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8395" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1787,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:12.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:37:12.928: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:37:14.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460232, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460232, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460233, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460232, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:37:17.972: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on 
ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:18.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6313" for this suite. STEP: Destroying namespace "webhook-6313-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.978 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":120,"skipped":1787,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:18.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:37:18.257: INFO: Creating ReplicaSet my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce Apr 2 21:37:18.277: INFO: Pod name my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce: Found 0 pods out of 1 Apr 2 21:37:23.290: INFO: Pod name my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce: Found 1 pods out of 1 Apr 2 21:37:23.290: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce" is running Apr 2 21:37:23.293: INFO: Pod "my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce-nvrqb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 21:37:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 21:37:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 21:37:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-04-02 21:37:18 +0000 UTC Reason: Message:}]) Apr 2 21:37:23.293: INFO: Trying to dial the pod Apr 2 21:37:28.306: INFO: Controller my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce: Got expected result from replica 1 [my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce-nvrqb]: "my-hostname-basic-8df70429-c658-450a-914e-3a8f877958ce-nvrqb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:28.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1386" for this suite. • [SLOW TEST:10.096 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":121,"skipped":1787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:28.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:32.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7343" for this suite. 
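------------------------------
[Editor's note] The wrapper-volume test above creates a secret, a configmap, and a pod, then cleans all three up; secret and configMap volumes are materialized by the kubelet on top of internal emptyDir "wrapper" volumes, which is what the "should not conflict" check exercises. A minimal client-go sketch of a pod mounting both side by side, assuming illustrative names and mount paths throughout (context-taking Create signatures assume client-go v0.18+):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // illustrative; the e2e run uses a generated namespace

	// The secret and configmap the pod will mount side by side.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-secret"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-configmap"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().Secrets(ns).Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// One pod, two wrapped volumes at distinct mount paths.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"},
				}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
					},
				}},
			},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------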
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":122,"skipped":1879,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:32.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:37:33.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Apr 2 21:37:35.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:37:37.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460253, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:37:40.560: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:37:40.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:41.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-383" for this suite. STEP: Destroying namespace "webhook-383-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.294 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":123,"skipped":1891,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:41.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:53.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8701" for this suite. • [SLOW TEST:11.224 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":124,"skipped":1913,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:53.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:37:53.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4340" for this suite. 
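------------------------------
[Editor's note] The ResourceQuota test above drives the full lifecycle: create a quota, wait for the quota controller to calculate status, create a ReplicaSet and observe usage being captured, then delete the ReplicaSet and observe usage being released. A minimal sketch of an object-count quota on ReplicaSets using client-go (the quota name and limit are illustrative; context-taking signatures assume client-go v0.18+):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // illustrative; the e2e run uses a generated namespace

	// Object-count quota capping the number of ReplicaSets in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				"count/replicasets.apps": resource.MustParse("2"),
			},
		},
	}
	if _, err := client.CoreV1().ResourceQuotas(ns).Create(context.TODO(), quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The quota controller fills in status.used asynchronously; polling this
	// field is how the test observes usage being captured and released.
	rq, err := client.CoreV1().ResourceQuotas(ns).Get(context.TODO(), "rs-quota", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("hard:", rq.Status.Hard, "used:", rq.Status.Used)
}
------------------------------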
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:37:53.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8424 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8424 STEP: creating replication controller externalsvc in namespace services-8424 I0402 21:37:53.399567 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8424, replica count: 2 I0402 21:37:56.450033 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 21:37:59.450252 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 2 21:37:59.505: INFO: Creating new exec pod Apr 2 21:38:03.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8424 execpod52k72 -- /bin/sh -x -c nslookup nodeport-service' Apr 2 21:38:03.768: INFO: stderr: "I0402 21:38:03.662073 1420 log.go:172] (0xc00099a000) (0xc00090c000) Create stream\nI0402 21:38:03.662133 1420 log.go:172] (0xc00099a000) (0xc00090c000) Stream added, broadcasting: 1\nI0402 21:38:03.664838 1420 log.go:172] (0xc00099a000) Reply frame received for 1\nI0402 21:38:03.664886 1420 log.go:172] (0xc00099a000) (0xc000a9a000) Create stream\nI0402 21:38:03.664902 1420 log.go:172] (0xc00099a000) (0xc000a9a000) Stream added, broadcasting: 3\nI0402 21:38:03.666184 1420 log.go:172] (0xc00099a000) Reply frame received for 3\nI0402 21:38:03.666222 1420 log.go:172] (0xc00099a000) (0xc000a9a0a0) Create stream\nI0402 21:38:03.666239 1420 log.go:172] (0xc00099a000) (0xc000a9a0a0) Stream added, broadcasting: 5\nI0402 21:38:03.667264 1420 log.go:172] (0xc00099a000) Reply frame received for 5\nI0402 21:38:03.751934 1420 log.go:172] (0xc00099a000) Data frame received for 5\nI0402 21:38:03.751986 1420 log.go:172] (0xc000a9a0a0) (5) Data frame handling\nI0402 21:38:03.752018 1420 log.go:172] (0xc000a9a0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0402 21:38:03.759217 1420 log.go:172] (0xc00099a000) Data frame received for 3\nI0402 21:38:03.759246 1420 log.go:172] (0xc000a9a000) (3) Data frame 
handling\nI0402 21:38:03.759298 1420 log.go:172] (0xc000a9a000) (3) Data frame sent\nI0402 21:38:03.760500 1420 log.go:172] (0xc00099a000) Data frame received for 3\nI0402 21:38:03.760512 1420 log.go:172] (0xc000a9a000) (3) Data frame handling\nI0402 21:38:03.760526 1420 log.go:172] (0xc000a9a000) (3) Data frame sent\nI0402 21:38:03.761397 1420 log.go:172] (0xc00099a000) Data frame received for 3\nI0402 21:38:03.761434 1420 log.go:172] (0xc000a9a000) (3) Data frame handling\nI0402 21:38:03.761465 1420 log.go:172] (0xc00099a000) Data frame received for 5\nI0402 21:38:03.761485 1420 log.go:172] (0xc000a9a0a0) (5) Data frame handling\nI0402 21:38:03.763302 1420 log.go:172] (0xc00099a000) Data frame received for 1\nI0402 21:38:03.763332 1420 log.go:172] (0xc00090c000) (1) Data frame handling\nI0402 21:38:03.763363 1420 log.go:172] (0xc00090c000) (1) Data frame sent\nI0402 21:38:03.763383 1420 log.go:172] (0xc00099a000) (0xc00090c000) Stream removed, broadcasting: 1\nI0402 21:38:03.763407 1420 log.go:172] (0xc00099a000) Go away received\nI0402 21:38:03.763802 1420 log.go:172] (0xc00099a000) (0xc00090c000) Stream removed, broadcasting: 1\nI0402 21:38:03.763823 1420 log.go:172] (0xc00099a000) (0xc000a9a000) Stream removed, broadcasting: 3\nI0402 21:38:03.763833 1420 log.go:172] (0xc00099a000) (0xc000a9a0a0) Stream removed, broadcasting: 5\n" Apr 2 21:38:03.768: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8424.svc.cluster.local\tcanonical name = externalsvc.services-8424.svc.cluster.local.\nName:\texternalsvc.services-8424.svc.cluster.local\nAddress: 10.108.84.62\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8424, will wait for the garbage collector to delete the pods Apr 2 21:38:03.828: INFO: Deleting ReplicationController externalsvc took: 6.452717ms Apr 2 21:38:03.928: INFO: Terminating ReplicationController externalsvc pods took: 100.22396ms Apr 2 21:38:19.583: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:38:19.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8424" for this suite. 
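------------------------------
[Editor's note] The Services test above converts a NodePort service to type ExternalName, after which lookups of the service name return a CNAME (visible in the nslookup output: nodeport-service resolves to externalsvc.services-8424.svc.cluster.local). A minimal client-go sketch of the same type change, with illustrative names; clearing the cluster IP and node ports mirrors what the e2e service helper does, since ExternalName services carry neither (context-taking signatures assume client-go v0.18+):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // illustrative; the e2e run uses a generated namespace

	// Fetch the existing NodePort service, flip its type, and update it.
	svc, err := client.CoreV1().Services(ns).Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.default.svc.cluster.local" // illustrative CNAME target
	svc.Spec.ClusterIP = ""                                         // ExternalName services have no cluster IP
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0 // node ports are not valid on ExternalName services
	}
	if _, err := client.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------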
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.408 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":126,"skipped":1988,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:38:19.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 2 21:38:23.710: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8479 PodName:pod-sharedvolume-d90ae20f-9d99-43c8-a244-6df33b1baa32 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 21:38:23.710: INFO: >>> kubeConfig: /root/.kube/config I0402 21:38:23.742236 6 log.go:172] (0xc003e74000) (0xc001e67860) Create stream I0402 21:38:23.742266 6 log.go:172] (0xc003e74000) (0xc001e67860) Stream added, broadcasting: 1 I0402 21:38:23.743982 6 log.go:172] (0xc003e74000) Reply frame received for 1 I0402 21:38:23.744020 6 log.go:172] (0xc003e74000) (0xc001e679a0) Create stream I0402 21:38:23.744035 6 log.go:172] (0xc003e74000) (0xc001e679a0) Stream added, broadcasting: 3 I0402 21:38:23.744989 6 log.go:172] (0xc003e74000) Reply frame received for 3 I0402 21:38:23.745008 6 log.go:172] (0xc003e74000) (0xc001e67cc0) Create stream I0402 21:38:23.745014 6 log.go:172] (0xc003e74000) (0xc001e67cc0) Stream added, broadcasting: 5 I0402 21:38:23.746140 6 log.go:172] (0xc003e74000) Reply frame received for 5 I0402 21:38:23.815620 6 log.go:172] (0xc003e74000) Data frame received for 5 I0402 21:38:23.815655 6 log.go:172] (0xc001e67cc0) (5) Data frame handling I0402 21:38:23.815676 6 log.go:172] (0xc003e74000) Data frame received for 3 I0402 21:38:23.815686 6 log.go:172] (0xc001e679a0) (3) Data frame handling I0402 21:38:23.815698 6 log.go:172] (0xc001e679a0) (3) Data frame sent I0402 21:38:23.815707 6 log.go:172] (0xc003e74000) Data frame received for 3 I0402 21:38:23.815715 6 log.go:172] (0xc001e679a0) (3) Data frame handling I0402 21:38:23.817027 6 log.go:172] (0xc003e74000) Data frame received for 1 I0402 21:38:23.817055 6 log.go:172] (0xc001e67860) (1) Data frame handling I0402 21:38:23.817084 6 log.go:172] (0xc001e67860) (1) Data frame sent I0402 21:38:23.817267 6 log.go:172] (0xc003e74000)
(0xc001e67860) Stream removed, broadcasting: 1 I0402 21:38:23.817328 6 log.go:172] (0xc003e74000) Go away received I0402 21:38:23.817418 6 log.go:172] (0xc003e74000) (0xc001e67860) Stream removed, broadcasting: 1 I0402 21:38:23.817443 6 log.go:172] (0xc003e74000) (0xc001e679a0) Stream removed, broadcasting: 3 I0402 21:38:23.817462 6 log.go:172] (0xc003e74000) (0xc001e67cc0) Stream removed, broadcasting: 5 Apr 2 21:38:23.817: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:38:23.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8479" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":127,"skipped":2060,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:38:23.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3942 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3942 I0402 21:38:24.035443 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3942, replica count: 2 I0402 21:38:27.085874 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 21:38:30.086165 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 21:38:30.086: INFO: Creating new exec pod Apr 2 21:38:35.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3942 execpodgmsqz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 2 21:38:35.328: INFO: stderr: "I0402 21:38:35.253001 1440 log.go:172] (0xc00061a000) (0xc00080b9a0) Create stream\nI0402 21:38:35.253082 1440 log.go:172] (0xc00061a000) (0xc00080b9a0) Stream added, broadcasting: 1\nI0402 21:38:35.255141 1440 log.go:172] (0xc00061a000) Reply frame received for 1\nI0402 21:38:35.255183 1440 log.go:172] (0xc00061a000) (0xc0005c0000) Create stream\nI0402 21:38:35.255194 1440 log.go:172] (0xc00061a000) (0xc0005c0000) Stream added, broadcasting: 3\nI0402 21:38:35.256121 1440 log.go:172] (0xc00061a000) Reply frame received for 3\nI0402 21:38:35.256153 1440 log.go:172] (0xc00061a000) (0xc0005c0140) Create stream\nI0402 21:38:35.256169 1440 log.go:172] (0xc00061a000) (0xc0005c0140) Stream added, broadcasting: 
5\nI0402 21:38:35.257023 1440 log.go:172] (0xc00061a000) Reply frame received for 5\nI0402 21:38:35.318226 1440 log.go:172] (0xc00061a000) Data frame received for 5\nI0402 21:38:35.318272 1440 log.go:172] (0xc0005c0140) (5) Data frame handling\nI0402 21:38:35.318307 1440 log.go:172] (0xc0005c0140) (5) Data frame sent\nI0402 21:38:35.318323 1440 log.go:172] (0xc00061a000) Data frame received for 5\nI0402 21:38:35.318338 1440 log.go:172] (0xc0005c0140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0402 21:38:35.318371 1440 log.go:172] (0xc0005c0140) (5) Data frame sent\nI0402 21:38:35.320150 1440 log.go:172] (0xc00061a000) Data frame received for 3\nI0402 21:38:35.320196 1440 log.go:172] (0xc0005c0000) (3) Data frame handling\nI0402 21:38:35.320225 1440 log.go:172] (0xc00061a000) Data frame received for 5\nI0402 21:38:35.320251 1440 log.go:172] (0xc0005c0140) (5) Data frame handling\nI0402 21:38:35.323299 1440 log.go:172] (0xc00061a000) Data frame received for 1\nI0402 21:38:35.323319 1440 log.go:172] (0xc00080b9a0) (1) Data frame handling\nI0402 21:38:35.323334 1440 log.go:172] (0xc00080b9a0) (1) Data frame sent\nI0402 21:38:35.323355 1440 log.go:172] (0xc00061a000) (0xc00080b9a0) Stream removed, broadcasting: 1\nI0402 21:38:35.323372 1440 log.go:172] (0xc00061a000) Go away received\nI0402 21:38:35.323688 1440 log.go:172] (0xc00061a000) (0xc00080b9a0) Stream removed, broadcasting: 1\nI0402 21:38:35.323714 1440 log.go:172] (0xc00061a000) (0xc0005c0000) Stream removed, broadcasting: 3\nI0402 21:38:35.323730 1440 log.go:172] (0xc00061a000) (0xc0005c0140) Stream removed, broadcasting: 5\n" Apr 2 21:38:35.328: INFO: stdout: "" Apr 2 21:38:35.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3942 execpodgmsqz -- /bin/sh -x -c nc -zv -t -w 2 10.101.253.147 80' Apr 2 21:38:35.531: INFO: stderr: "I0402 21:38:35.452970 1461 log.go:172] (0xc000798790) (0xc000918140) Create stream\nI0402 21:38:35.453041 1461 log.go:172] (0xc000798790) (0xc000918140) Stream added, broadcasting: 1\nI0402 21:38:35.456039 1461 log.go:172] (0xc000798790) Reply frame received for 1\nI0402 21:38:35.456081 1461 log.go:172] (0xc000798790) (0xc0006799a0) Create stream\nI0402 21:38:35.456097 1461 log.go:172] (0xc000798790) (0xc0006799a0) Stream added, broadcasting: 3\nI0402 21:38:35.456818 1461 log.go:172] (0xc000798790) Reply frame received for 3\nI0402 21:38:35.456854 1461 log.go:172] (0xc000798790) (0xc0009181e0) Create stream\nI0402 21:38:35.456870 1461 log.go:172] (0xc000798790) (0xc0009181e0) Stream added, broadcasting: 5\nI0402 21:38:35.457870 1461 log.go:172] (0xc000798790) Reply frame received for 5\nI0402 21:38:35.523654 1461 log.go:172] (0xc000798790) Data frame received for 5\nI0402 21:38:35.523694 1461 log.go:172] (0xc0009181e0) (5) Data frame handling\nI0402 21:38:35.523707 1461 log.go:172] (0xc0009181e0) (5) Data frame sent\nI0402 21:38:35.523716 1461 log.go:172] (0xc000798790) Data frame received for 5\nI0402 21:38:35.523723 1461 log.go:172] (0xc0009181e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.253.147 80\nConnection to 10.101.253.147 80 port [tcp/http] succeeded!\nI0402 21:38:35.523747 1461 log.go:172] (0xc000798790) Data frame received for 3\nI0402 21:38:35.523756 1461 log.go:172] (0xc0006799a0) (3) Data frame handling\nI0402 21:38:35.525379 1461 log.go:172] (0xc000798790) Data frame received for 1\nI0402 21:38:35.525412 1461 log.go:172] (0xc000918140) (1) Data 
frame handling\nI0402 21:38:35.525434 1461 log.go:172] (0xc000918140) (1) Data frame sent\nI0402 21:38:35.525451 1461 log.go:172] (0xc000798790) (0xc000918140) Stream removed, broadcasting: 1\nI0402 21:38:35.525573 1461 log.go:172] (0xc000798790) Go away received\nI0402 21:38:35.525930 1461 log.go:172] (0xc000798790) (0xc000918140) Stream removed, broadcasting: 1\nI0402 21:38:35.525971 1461 log.go:172] (0xc000798790) (0xc0006799a0) Stream removed, broadcasting: 3\nI0402 21:38:35.526005 1461 log.go:172] (0xc000798790) (0xc0009181e0) Stream removed, broadcasting: 5\n" Apr 2 21:38:35.531: INFO: stdout: "" Apr 2 21:38:35.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3942 execpodgmsqz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32377' Apr 2 21:38:35.753: INFO: stderr: "I0402 21:38:35.680546 1480 log.go:172] (0xc0006fc9a0) (0xc0006281e0) Create stream\nI0402 21:38:35.680592 1480 log.go:172] (0xc0006fc9a0) (0xc0006281e0) Stream added, broadcasting: 1\nI0402 21:38:35.683074 1480 log.go:172] (0xc0006fc9a0) Reply frame received for 1\nI0402 21:38:35.683122 1480 log.go:172] (0xc0006fc9a0) (0xc00064fae0) Create stream\nI0402 21:38:35.683141 1480 log.go:172] (0xc0006fc9a0) (0xc00064fae0) Stream added, broadcasting: 3\nI0402 21:38:35.684132 1480 log.go:172] (0xc0006fc9a0) Reply frame received for 3\nI0402 21:38:35.684177 1480 log.go:172] (0xc0006fc9a0) (0xc00064fcc0) Create stream\nI0402 21:38:35.684191 1480 log.go:172] (0xc0006fc9a0) (0xc00064fcc0) Stream added, broadcasting: 5\nI0402 21:38:35.685422 1480 log.go:172] (0xc0006fc9a0) Reply frame received for 5\nI0402 21:38:35.744855 1480 log.go:172] (0xc0006fc9a0) Data frame received for 5\nI0402 21:38:35.744901 1480 log.go:172] (0xc00064fcc0) (5) Data frame handling\nI0402 21:38:35.744944 1480 log.go:172] (0xc00064fcc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 32377\nConnection to 172.17.0.10 32377 port [tcp/32377] succeeded!\nI0402 21:38:35.745351 1480 log.go:172] (0xc0006fc9a0) Data frame received for 5\nI0402 21:38:35.745391 1480 log.go:172] (0xc00064fcc0) (5) Data frame handling\nI0402 21:38:35.745427 1480 log.go:172] (0xc0006fc9a0) Data frame received for 3\nI0402 21:38:35.745451 1480 log.go:172] (0xc00064fae0) (3) Data frame handling\nI0402 21:38:35.747371 1480 log.go:172] (0xc0006fc9a0) Data frame received for 1\nI0402 21:38:35.747486 1480 log.go:172] (0xc0006281e0) (1) Data frame handling\nI0402 21:38:35.747526 1480 log.go:172] (0xc0006281e0) (1) Data frame sent\nI0402 21:38:35.747562 1480 log.go:172] (0xc0006fc9a0) (0xc0006281e0) Stream removed, broadcasting: 1\nI0402 21:38:35.747594 1480 log.go:172] (0xc0006fc9a0) Go away received\nI0402 21:38:35.748039 1480 log.go:172] (0xc0006fc9a0) (0xc0006281e0) Stream removed, broadcasting: 1\nI0402 21:38:35.748076 1480 log.go:172] (0xc0006fc9a0) (0xc00064fae0) Stream removed, broadcasting: 3\nI0402 21:38:35.748090 1480 log.go:172] (0xc0006fc9a0) (0xc00064fcc0) Stream removed, broadcasting: 5\n" Apr 2 21:38:35.753: INFO: stdout: "" Apr 2 21:38:35.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3942 execpodgmsqz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32377' Apr 2 21:38:35.931: INFO: stderr: "I0402 21:38:35.867304 1503 log.go:172] (0xc0009f2d10) (0xc000b843c0) Create stream\nI0402 21:38:35.867351 1503 log.go:172] (0xc0009f2d10) (0xc000b843c0) Stream added, broadcasting: 1\nI0402 21:38:35.869661 1503 log.go:172] (0xc0009f2d10) Reply frame received for 1\nI0402 21:38:35.869714 1503 
log.go:172] (0xc0009f2d10) (0xc000b46000) Create stream\nI0402 21:38:35.869731 1503 log.go:172] (0xc0009f2d10) (0xc000b46000) Stream added, broadcasting: 3\nI0402 21:38:35.870601 1503 log.go:172] (0xc0009f2d10) Reply frame received for 3\nI0402 21:38:35.870638 1503 log.go:172] (0xc0009f2d10) (0xc000b84460) Create stream\nI0402 21:38:35.870648 1503 log.go:172] (0xc0009f2d10) (0xc000b84460) Stream added, broadcasting: 5\nI0402 21:38:35.871484 1503 log.go:172] (0xc0009f2d10) Reply frame received for 5\nI0402 21:38:35.922838 1503 log.go:172] (0xc0009f2d10) Data frame received for 5\nI0402 21:38:35.922869 1503 log.go:172] (0xc000b84460) (5) Data frame handling\nI0402 21:38:35.922890 1503 log.go:172] (0xc000b84460) (5) Data frame sent\nI0402 21:38:35.922918 1503 log.go:172] (0xc0009f2d10) Data frame received for 5\nI0402 21:38:35.922941 1503 log.go:172] (0xc000b84460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32377\nConnection to 172.17.0.8 32377 port [tcp/32377] succeeded!\nI0402 21:38:35.922971 1503 log.go:172] (0xc000b84460) (5) Data frame sent\nI0402 21:38:35.922986 1503 log.go:172] (0xc0009f2d10) Data frame received for 5\nI0402 21:38:35.923007 1503 log.go:172] (0xc000b84460) (5) Data frame handling\nI0402 21:38:35.923553 1503 log.go:172] (0xc0009f2d10) Data frame received for 3\nI0402 21:38:35.923586 1503 log.go:172] (0xc000b46000) (3) Data frame handling\nI0402 21:38:35.925627 1503 log.go:172] (0xc0009f2d10) Data frame received for 1\nI0402 21:38:35.925656 1503 log.go:172] (0xc000b843c0) (1) Data frame handling\nI0402 21:38:35.925707 1503 log.go:172] (0xc000b843c0) (1) Data frame sent\nI0402 21:38:35.926003 1503 log.go:172] (0xc0009f2d10) (0xc000b843c0) Stream removed, broadcasting: 1\nI0402 21:38:35.926072 1503 log.go:172] (0xc0009f2d10) Go away received\nI0402 21:38:35.926484 1503 log.go:172] (0xc0009f2d10) (0xc000b843c0) Stream removed, broadcasting: 1\nI0402 21:38:35.926511 1503 log.go:172] (0xc0009f2d10) (0xc000b46000) Stream removed, broadcasting: 3\nI0402 21:38:35.926525 1503 log.go:172] (0xc0009f2d10) (0xc000b84460) Stream removed, broadcasting: 5\n" Apr 2 21:38:35.931: INFO: stdout: "" Apr 2 21:38:35.931: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:38:35.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3942" for this suite. 
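Replayed by hand, the type flip above reduces to: create an ExternalName service, patch it over to NodePort (which drops externalName and needs an explicit port list), then probe the allocated nodePort the way the nc checks here do. A minimal sketch, assuming the service has backing endpoints (the suite runs a two-replica RC behind it); the service name, port, external name, and node IP below are illustrative, and the agnhost image is the one this log already uses for its exec pods:

$ kubectl create service externalname demo-svc --external-name example.com
$ kubectl patch service demo-svc -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
$ NODE_PORT=$(kubectl get service demo-svc -o jsonpath='{.spec.ports[0].nodePort}')
$ kubectl run probe --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --restart=Never --command -- nc -zv -t -w 2 172.17.0.10 "$NODE_PORT"
$ kubectl logs probe   # expect: Connection to 172.17.0.10 ... succeeded! (once endpoints exist)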
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.177 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":128,"skipped":2070,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:38:36.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 2 21:38:36.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 2 21:38:36.164: INFO: stderr: "" Apr 2 21:38:36.164: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:38:36.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7635" for this suite. 
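The cluster-info stdout above is ANSI-colored — the \x1b[0;32m sequences are terminal color escapes, which is why the raw string looks garbled. To reproduce the test's assertion (both the master and KubeDNS endpoints reported) with the colors stripped first, something like the following works; the sed \x1b escape is a GNU sed assumption:

$ kubectl cluster-info | sed 's/\x1b\[[0-9;]*m//g' | grep -E 'Kubernetes master|KubeDNS'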
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":129,"skipped":2073,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:38:36.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-5c23ea03-26a5-4168-bd80-2b5b8a0bd213 STEP: Creating a pod to test consume secrets Apr 2 21:38:36.232: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906" in namespace "projected-6813" to be "success or failure" Apr 2 21:38:36.236: INFO: Pod "pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906": Phase="Pending", Reason="", readiness=false. Elapsed: 3.994534ms Apr 2 21:38:38.240: INFO: Pod "pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008275659s Apr 2 21:38:40.245: INFO: Pod "pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013125299s STEP: Saw pod success Apr 2 21:38:40.245: INFO: Pod "pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906" satisfied condition "success or failure" Apr 2 21:38:40.248: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906 container projected-secret-volume-test: STEP: delete the pod Apr 2 21:38:40.291: INFO: Waiting for pod pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906 to disappear Apr 2 21:38:40.305: INFO: Pod pod-projected-secrets-bcddd78f-84dd-4b2a-8058-19e721781906 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:38:40.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6813" for this suite. 
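The pod built for this check has roughly the following shape: a projected Secret volume with an explicit defaultMode, mounted into a container running as a non-root UID under a pod-level fsGroup. A sketch only — the names, UID/GID, and mode value are illustrative (the suite generates its own), and the referenced Secret is assumed to exist:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo          # hypothetical name
spec:
  securityContext:
    fsGroup: 1001                      # volume files get this group ownership
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/projected && sleep 3600"]
    securityContext:
      runAsUser: 1000                  # non-root, as the test name requires
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      defaultMode: 0440                # octal file mode the test then verifies
      sources:
      - secret:
          name: demo-secret            # assumed to exist beforehand
EOF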
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2087,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:38:40.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 2 21:38:40.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9717 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 2 21:38:43.445: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0402 21:38:43.374206 1545 log.go:172] (0xc000976790) (0xc00073a140) Create stream\nI0402 21:38:43.374255 1545 log.go:172] (0xc000976790) (0xc00073a140) Stream added, broadcasting: 1\nI0402 21:38:43.376200 1545 log.go:172] (0xc000976790) Reply frame received for 1\nI0402 21:38:43.376240 1545 log.go:172] (0xc000976790) (0xc000754000) Create stream\nI0402 21:38:43.376254 1545 log.go:172] (0xc000976790) (0xc000754000) Stream added, broadcasting: 3\nI0402 21:38:43.377344 1545 log.go:172] (0xc000976790) Reply frame received for 3\nI0402 21:38:43.377385 1545 log.go:172] (0xc000976790) (0xc000635a40) Create stream\nI0402 21:38:43.377396 1545 log.go:172] (0xc000976790) (0xc000635a40) Stream added, broadcasting: 5\nI0402 21:38:43.378194 1545 log.go:172] (0xc000976790) Reply frame received for 5\nI0402 21:38:43.378217 1545 log.go:172] (0xc000976790) (0xc000635ae0) Create stream\nI0402 21:38:43.378227 1545 log.go:172] (0xc000976790) (0xc000635ae0) Stream added, broadcasting: 7\nI0402 21:38:43.378974 1545 log.go:172] (0xc000976790) Reply frame received for 7\nI0402 21:38:43.379084 1545 log.go:172] (0xc000754000) (3) Writing data frame\nI0402 21:38:43.379198 1545 log.go:172] (0xc000754000) (3) Writing data frame\nI0402 21:38:43.380091 1545 log.go:172] (0xc000976790) Data frame received for 5\nI0402 21:38:43.380113 1545 log.go:172] (0xc000635a40) (5) Data frame handling\nI0402 21:38:43.380127 1545 log.go:172] (0xc000635a40) (5) Data frame sent\nI0402 21:38:43.380593 1545 log.go:172] (0xc000976790) Data frame received for 5\nI0402 21:38:43.380609 1545 log.go:172] (0xc000635a40) (5) Data frame handling\nI0402 21:38:43.380622 1545 log.go:172] (0xc000635a40) (5) Data frame sent\nI0402 21:38:43.422312 1545 log.go:172] (0xc000976790) Data frame received for 5\nI0402 21:38:43.422349 1545 
log.go:172] (0xc000635a40) (5) Data frame handling\nI0402 21:38:43.422366 1545 log.go:172] (0xc000976790) Data frame received for 7\nI0402 21:38:43.422371 1545 log.go:172] (0xc000635ae0) (7) Data frame handling\nI0402 21:38:43.422606 1545 log.go:172] (0xc000976790) Data frame received for 1\nI0402 21:38:43.422643 1545 log.go:172] (0xc00073a140) (1) Data frame handling\nI0402 21:38:43.422664 1545 log.go:172] (0xc00073a140) (1) Data frame sent\nI0402 21:38:43.422683 1545 log.go:172] (0xc000976790) (0xc00073a140) Stream removed, broadcasting: 1\nI0402 21:38:43.422756 1545 log.go:172] (0xc000976790) (0xc000754000) Stream removed, broadcasting: 3\nI0402 21:38:43.422794 1545 log.go:172] (0xc000976790) Go away received\nI0402 21:38:43.423058 1545 log.go:172] (0xc000976790) (0xc00073a140) Stream removed, broadcasting: 1\nI0402 21:38:43.423080 1545 log.go:172] (0xc000976790) (0xc000754000) Stream removed, broadcasting: 3\nI0402 21:38:43.423092 1545 log.go:172] (0xc000976790) (0xc000635a40) Stream removed, broadcasting: 5\nI0402 21:38:43.423103 1545 log.go:172] (0xc000976790) (0xc000635ae0) Stream removed, broadcasting: 7\n" Apr 2 21:38:43.445: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:38:45.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9717" for this suite. • [SLOW TEST:5.155 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":131,"skipped":2098,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:38:45.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:39:01.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1133" for this suite. • [SLOW TEST:16.157 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":132,"skipped":2138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:39:01.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 2 21:39:02.282: INFO: Pod name wrapped-volume-race-d92e0c0a-4ff6-4044-a23e-a265b62147dc: Found 0 pods out of 5 Apr 2 21:39:07.289: INFO: Pod name wrapped-volume-race-d92e0c0a-4ff6-4044-a23e-a265b62147dc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d92e0c0a-4ff6-4044-a23e-a265b62147dc in namespace emptydir-wrapper-7352, will wait for the garbage collector to delete the pods Apr 2 21:39:19.371: INFO: Deleting ReplicationController wrapped-volume-race-d92e0c0a-4ff6-4044-a23e-a265b62147dc took: 7.295355ms Apr 2 21:39:19.671: INFO: Terminating ReplicationController wrapped-volume-race-d92e0c0a-4ff6-4044-a23e-a265b62147dc pods took: 300.257666ms STEP: Creating RC which spawns configmap-volume pods Apr 2 21:39:29.899: INFO: Pod name wrapped-volume-race-ce44c23b-ebd7-407d-aff0-67eb9fa57749: Found 0 pods out of 5 Apr 2 21:39:34.906: INFO: Pod name 
wrapped-volume-race-ce44c23b-ebd7-407d-aff0-67eb9fa57749: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ce44c23b-ebd7-407d-aff0-67eb9fa57749 in namespace emptydir-wrapper-7352, will wait for the garbage collector to delete the pods Apr 2 21:39:49.010: INFO: Deleting ReplicationController wrapped-volume-race-ce44c23b-ebd7-407d-aff0-67eb9fa57749 took: 22.594955ms Apr 2 21:39:49.310: INFO: Terminating ReplicationController wrapped-volume-race-ce44c23b-ebd7-407d-aff0-67eb9fa57749 pods took: 300.234782ms STEP: Creating RC which spawns configmap-volume pods Apr 2 21:39:59.642: INFO: Pod name wrapped-volume-race-a1d59a58-11af-46ad-9c8c-0c3a1c4e3fdd: Found 0 pods out of 5 Apr 2 21:40:04.668: INFO: Pod name wrapped-volume-race-a1d59a58-11af-46ad-9c8c-0c3a1c4e3fdd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a1d59a58-11af-46ad-9c8c-0c3a1c4e3fdd in namespace emptydir-wrapper-7352, will wait for the garbage collector to delete the pods Apr 2 21:40:18.753: INFO: Deleting ReplicationController wrapped-volume-race-a1d59a58-11af-46ad-9c8c-0c3a1c4e3fdd took: 7.285125ms Apr 2 21:40:19.053: INFO: Terminating ReplicationController wrapped-volume-race-a1d59a58-11af-46ad-9c8c-0c3a1c4e3fdd pods took: 300.184776ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:40:30.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7352" for this suite. • [SLOW TEST:89.206 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":133,"skipped":2183,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:40:30.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7809 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7809 Apr 2 21:40:30.913: INFO: Found 0 stateful pods, waiting for 1 Apr 2 21:40:40.918: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 2 21:40:40.937: INFO: Deleting all statefulset in ns statefulset-7809 Apr 2 21:40:40.957: INFO: Scaling statefulset ss to 0 Apr 2 21:41:01.023: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 21:41:01.027: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:01.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7809" for this suite. • [SLOW TEST:30.213 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":134,"skipped":2196,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:01.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-063a5f5b-8241-49ff-86fd-c5a349bfd818 STEP: Creating a pod to test consume configMaps Apr 2 21:41:01.105: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3" in namespace "configmap-3833" to be "success or failure" Apr 2 21:41:01.133: INFO: Pod "pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.438948ms Apr 2 21:41:03.138: INFO: Pod "pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032692704s Apr 2 21:41:05.142: INFO: Pod "pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036900564s STEP: Saw pod success Apr 2 21:41:05.142: INFO: Pod "pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3" satisfied condition "success or failure" Apr 2 21:41:05.145: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3 container configmap-volume-test: STEP: delete the pod Apr 2 21:41:05.249: INFO: Waiting for pod pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3 to disappear Apr 2 21:41:05.255: INFO: Pod pod-configmaps-f1c19e6f-13d4-4a82-96e6-c96c23f21cc3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:05.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3833" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2218,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:05.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:41:05.303: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:09.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6202" for this suite. 
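What the websocket variant exercises is the pod's exec subresource on the API server; kubectl exec drives the same endpoint over an upgraded connection. A sketch of both views, with the pod name and command illustrative:

$ kubectl exec websocket-demo -- /bin/sh -c 'echo remote output'
# the subresource a raw websocket client would dial, roughly:
#   wss://<apiserver>/api/v1/namespaces/<ns>/pods/websocket-demo/exec?command=echo&command=hi&stdout=true&stderr=true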
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2224,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:09.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:41:09.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9313' Apr 2 21:41:12.399: INFO: stderr: "" Apr 2 21:41:12.399: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 2 21:41:12.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9313' Apr 2 21:41:12.691: INFO: stderr: "" Apr 2 21:41:12.691: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 2 21:41:13.695: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:41:13.695: INFO: Found 0 / 1 Apr 2 21:41:14.732: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:41:14.732: INFO: Found 0 / 1 Apr 2 21:41:15.695: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:41:15.695: INFO: Found 1 / 1 Apr 2 21:41:15.695: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 2 21:41:15.698: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 21:41:15.698: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 2 21:41:15.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-m2vvt --namespace=kubectl-9313' Apr 2 21:41:15.818: INFO: stderr: "" Apr 2 21:41:15.818: INFO: stdout: "Name: agnhost-master-m2vvt\nNamespace: kubectl-9313\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Thu, 02 Apr 2020 21:41:12 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.219\nIPs:\n IP: 10.244.2.219\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://6a625a231eea370a98500526e4d404bc957a25c44d854c03730252481004cdaa\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 02 Apr 2020 21:41:14 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-jq5kk (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-jq5kk:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-jq5kk\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-9313/agnhost-master-m2vvt to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Apr 2 21:41:15.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9313' Apr 2 21:41:15.957: INFO: stderr: "" Apr 2 21:41:15.957: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9313\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-m2vvt\n" Apr 2 21:41:15.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9313' Apr 2 21:41:16.071: INFO: stderr: "" Apr 2 21:41:16.071: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9313\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.104.71.63\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.219:6379\nSession Affinity: None\nEvents: \n" Apr 2 21:41:16.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 2 21:41:16.202: INFO: stderr: "" Apr 2 21:41:16.202: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Thu, 02 Apr 2020 21:41:07 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 02 Apr 2020 21:37:58 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 02 Apr 2020 21:37:58 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 02 Apr 2020 21:37:58 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 02 Apr 2020 21:37:58 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 18d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 2 21:41:16.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9313' Apr 2 21:41:16.310: INFO: stderr: "" Apr 2 21:41:16.310: INFO: stdout: "Name: kubectl-9313\nLabels: e2e-framework=kubectl\n e2e-run=cb535d20-9b2e-4664-870d-63279a155206\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange 
resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:16.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9313" for this suite. • [SLOW TEST:6.853 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1154 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":137,"skipped":2227,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:16.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:20.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2276" for this suite. 
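This check reduces to running a one-shot busybox command and confirming its stdout lands in the container log. Replayed by hand (pod name and message illustrative):

$ kubectl run busybox-logs-demo --image=busybox:1.29 --restart=Never -- sh -c 'echo hello from the busybox pod'
$ kubectl logs busybox-logs-demo       # should print: hello from the busybox pod
$ kubectl delete pod busybox-logs-demo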
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2249,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:20.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 2 21:41:20.481: INFO: >>> kubeConfig: /root/.kube/config Apr 2 21:41:23.381: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:33.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1404" for this suite. • [SLOW TEST:13.335 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":139,"skipped":2250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:33.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8047 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[] Apr 2 21:41:33.924: INFO: Get endpoints failed (11.767516ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 2 21:41:34.928: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[] (1.015750948s elapsed) STEP: Creating pod pod1 
in namespace services-8047 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[pod1:[100]] Apr 2 21:41:38.016: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[pod1:[100]] (3.081035061s elapsed) STEP: Creating pod pod2 in namespace services-8047 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[pod1:[100] pod2:[101]] Apr 2 21:41:41.100: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[pod1:[100] pod2:[101]] (3.080278485s elapsed) STEP: Deleting pod pod1 in namespace services-8047 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[pod2:[101]] Apr 2 21:41:42.171: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[pod2:[101]] (1.06523716s elapsed) STEP: Deleting pod pod2 in namespace services-8047 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[] Apr 2 21:41:43.185: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[] (1.009236806s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:43.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8047" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.471 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":140,"skipped":2283,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:43.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:41:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4299" for this suite. • [SLOW TEST:16.361 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":141,"skipped":2298,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:41:59.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 2 21:41:59.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458' Apr 2 21:41:59.976: INFO: stderr: "" Apr 2 21:41:59.976: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
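(Note: the replication controller manifest piped to 'kubectl create -f -' a few lines above is elided from the log. A minimal equivalent sketch, using only the names, label, and image that the surrounding output reports, would be:)

  cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6458 create -f -
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: update-demo-nautilus
  spec:
    replicas: 2                    # the log shows two nautilus pods
    selector:
      name: update-demo            # matches the -l name=update-demo queries below
    template:
      metadata:
        labels:
          name: update-demo
      spec:
        containers:
        - name: update-demo        # the container name the go-templates below match on
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
  EOF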
Apr 2 21:41:59.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6458' Apr 2 21:42:00.079: INFO: stderr: "" Apr 2 21:42:00.079: INFO: stdout: "update-demo-nautilus-8q9x7 update-demo-nautilus-b5vxg " Apr 2 21:42:00.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q9x7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:00.168: INFO: stderr: "" Apr 2 21:42:00.168: INFO: stdout: "" Apr 2 21:42:00.168: INFO: update-demo-nautilus-8q9x7 is created but not running Apr 2 21:42:05.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6458' Apr 2 21:42:05.270: INFO: stderr: "" Apr 2 21:42:05.270: INFO: stdout: "update-demo-nautilus-8q9x7 update-demo-nautilus-b5vxg " Apr 2 21:42:05.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q9x7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:05.365: INFO: stderr: "" Apr 2 21:42:05.365: INFO: stdout: "true" Apr 2 21:42:05.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q9x7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:05.456: INFO: stderr: "" Apr 2 21:42:05.456: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 21:42:05.456: INFO: validating pod update-demo-nautilus-8q9x7 Apr 2 21:42:05.460: INFO: got data: { "image": "nautilus.jpg" } Apr 2 21:42:05.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 21:42:05.460: INFO: update-demo-nautilus-8q9x7 is verified up and running Apr 2 21:42:05.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vxg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:05.559: INFO: stderr: "" Apr 2 21:42:05.559: INFO: stdout: "true" Apr 2 21:42:05.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b5vxg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:05.652: INFO: stderr: "" Apr 2 21:42:05.652: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 21:42:05.652: INFO: validating pod update-demo-nautilus-b5vxg Apr 2 21:42:05.656: INFO: got data: { "image": "nautilus.jpg" } Apr 2 21:42:05.656: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
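(Note on the go-template checks being run here: they print "true" only once a containerStatuses entry named update-demo reports a running state, and print the container image in the second form. On current kubectl the same checks can be written more directly with jsonpath; this is an approximate equivalent for reference, not what the test itself runs:)

  # Running-state check: prints a non-empty startedAt timestamp once the container runs
  kubectl --namespace=kubectl-6458 get pod update-demo-nautilus-8q9x7 \
    -o jsonpath='{.status.containerStatuses[?(@.name=="update-demo")].state.running.startedAt}'
  # Image check, equivalent to the .spec.containers go-template above
  kubectl --namespace=kubectl-6458 get pod update-demo-nautilus-8q9x7 \
    -o jsonpath='{.spec.containers[?(@.name=="update-demo")].image}'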
Apr 2 21:42:05.656: INFO: update-demo-nautilus-b5vxg is verified up and running STEP: rolling-update to new replication controller Apr 2 21:42:05.660: INFO: scanned /root for discovery docs: Apr 2 21:42:05.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6458' Apr 2 21:42:28.195: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 2 21:42:28.195: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 2 21:42:28.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6458' Apr 2 21:42:28.319: INFO: stderr: "" Apr 2 21:42:28.320: INFO: stdout: "update-demo-kitten-hqscb update-demo-kitten-j2vk2 update-demo-nautilus-b5vxg " STEP: Replicas for name=update-demo: expected=2 actual=3 Apr 2 21:42:33.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6458' Apr 2 21:42:33.428: INFO: stderr: "" Apr 2 21:42:33.428: INFO: stdout: "update-demo-kitten-hqscb update-demo-kitten-j2vk2 " Apr 2 21:42:33.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hqscb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:33.530: INFO: stderr: "" Apr 2 21:42:33.530: INFO: stdout: "true" Apr 2 21:42:33.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hqscb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:33.616: INFO: stderr: "" Apr 2 21:42:33.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 2 21:42:33.616: INFO: validating pod update-demo-kitten-hqscb Apr 2 21:42:33.620: INFO: got data: { "image": "kitten.jpg" } Apr 2 21:42:33.620: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 2 21:42:33.620: INFO: update-demo-kitten-hqscb is verified up and running Apr 2 21:42:33.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j2vk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:33.724: INFO: stderr: "" Apr 2 21:42:33.724: INFO: stdout: "true" Apr 2 21:42:33.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j2vk2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 2 21:42:33.825: INFO: stderr: "" Apr 2 21:42:33.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 2 21:42:33.826: INFO: validating pod update-demo-kitten-j2vk2 Apr 2 21:42:33.830: INFO: got data: { "image": "kitten.jpg" } Apr 2 21:42:33.830: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 2 21:42:33.830: INFO: update-demo-kitten-j2vk2 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:42:33.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6458" for this suite. • [SLOW TEST:34.228 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":142,"skipped":2320,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:42:33.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3439 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3439 STEP: Creating statefulset with conflicting port in namespace statefulset-3439 STEP: Waiting until pod test-pod will start running in namespace statefulset-3439 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3439 Apr 2 21:42:38.009: INFO: Observed stateful pod in namespace: statefulset-3439, name: ss-0, uid: 7af2e846-c165-473c-aaea-2c1a0b7b5fb7, status phase: Pending. Waiting for statefulset controller to delete. Apr 2 21:42:38.533: INFO: Observed stateful pod in namespace: statefulset-3439, name: ss-0, uid: 7af2e846-c165-473c-aaea-2c1a0b7b5fb7, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 2 21:42:38.580: INFO: Observed stateful pod in namespace: statefulset-3439, name: ss-0, uid: 7af2e846-c165-473c-aaea-2c1a0b7b5fb7, status phase: Failed. Waiting for statefulset controller to delete. Apr 2 21:42:38.588: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3439 STEP: Removing pod with conflicting port in namespace statefulset-3439 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3439 and running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 2 21:42:42.643: INFO: Deleting all statefulset in ns statefulset-3439 Apr 2 21:42:42.646: INFO: Scaling statefulset ss to 0 Apr 2 21:42:52.661: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 21:42:52.664: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:42:52.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3439" for this suite. • [SLOW TEST:18.846 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":143,"skipped":2330,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:42:52.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Apr 2 21:42:52.731: INFO: Waiting up to 5m0s for pod "client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb" in namespace "containers-8265" to be "success or failure" Apr 2 21:42:52.735: INFO: Pod "client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.908252ms Apr 2 21:42:54.739: INFO: Pod "client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007900774s Apr 2 21:42:56.743: INFO: Pod "client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.011766536s STEP: Saw pod success Apr 2 21:42:56.743: INFO: Pod "client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb" satisfied condition "success or failure" Apr 2 21:42:56.746: INFO: Trying to get logs from node jerma-worker pod client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb container test-container: STEP: delete the pod Apr 2 21:42:56.791: INFO: Waiting for pod client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb to disappear Apr 2 21:42:56.837: INFO: Pod client-containers-2dedbf9c-5d0d-4bcb-b1da-deea4d39b3bb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:42:56.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8265" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2334,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:42:56.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-9d1449d1-1421-4d28-931e-e9ce6eb59c9f STEP: Creating a pod to test consume secrets Apr 2 21:42:56.948: INFO: Waiting up to 5m0s for pod "pod-secrets-840da355-f41d-4308-80ef-80146a4226ee" in namespace "secrets-900" to be "success or failure" Apr 2 21:42:56.952: INFO: Pod "pod-secrets-840da355-f41d-4308-80ef-80146a4226ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.891875ms Apr 2 21:42:58.977: INFO: Pod "pod-secrets-840da355-f41d-4308-80ef-80146a4226ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028626926s Apr 2 21:43:00.981: INFO: Pod "pod-secrets-840da355-f41d-4308-80ef-80146a4226ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032601894s STEP: Saw pod success Apr 2 21:43:00.981: INFO: Pod "pod-secrets-840da355-f41d-4308-80ef-80146a4226ee" satisfied condition "success or failure" Apr 2 21:43:00.984: INFO: Trying to get logs from node jerma-worker pod pod-secrets-840da355-f41d-4308-80ef-80146a4226ee container secret-volume-test: STEP: delete the pod Apr 2 21:43:01.019: INFO: Waiting for pod pod-secrets-840da355-f41d-4308-80ef-80146a4226ee to disappear Apr 2 21:43:01.024: INFO: Pod pod-secrets-840da355-f41d-4308-80ef-80146a4226ee no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:43:01.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-900" for this suite. 
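(Note: the secret and pod manifests the Secrets test creates above are not printed in the log. A minimal sketch of a secret volume "with mappings and Item Mode set", reusing the container name reported in the log but with otherwise illustrative key, path, and mode values, might look like:)

  cat <<'EOF' | kubectl --namespace=secrets-900 create -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test-map          # hypothetical; the test generates a unique name
  data:
    data-1: dmFsdWUtMQ==           # base64("value-1"), illustrative
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-example      # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test     # container name as reported in the log
      image: busybox               # stand-in for the test's own image
      command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map
        items:
        - key: data-1
          path: new-path-data-1    # the "mapping": the key is remapped to this file path
          mode: 0400               # the per-item file mode the test asserts on (r-- for owner)
  EOF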
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:43:01.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:43:01.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f" in namespace "downward-api-2008" to be "success or failure" Apr 2 21:43:01.102: INFO: Pod "downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731445ms Apr 2 21:43:03.117: INFO: Pod "downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018785423s Apr 2 21:43:05.121: INFO: Pod "downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022641891s STEP: Saw pod success Apr 2 21:43:05.121: INFO: Pod "downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f" satisfied condition "success or failure" Apr 2 21:43:05.124: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f container client-container: STEP: delete the pod Apr 2 21:43:05.139: INFO: Waiting for pod downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f to disappear Apr 2 21:43:05.144: INFO: Pod downwardapi-volume-75543bd6-c556-4ba2-bc4d-40814457842f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:43:05.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2008" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2366,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:43:05.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:43:05.855: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:43:07.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460585, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460585, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460585, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460585, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:43:10.891: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:43:10.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4675" for this suite. STEP: Destroying namespace "webhook-4675-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.891 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":147,"skipped":2373,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:43:11.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:43:11.140: INFO: Creating deployment "webserver-deployment" Apr 2 21:43:11.149: INFO: Waiting for observed generation 1 Apr 2 21:43:13.267: INFO: Waiting for all required pods to come up Apr 2 21:43:13.270: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 2 21:43:21.675: INFO: Waiting for deployment "webserver-deployment" to complete Apr 2 21:43:21.680: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 2 21:43:21.687: INFO: Updating deployment webserver-deployment Apr 2 21:43:21.687: INFO: Waiting for observed generation 2 Apr 2 21:43:23.699: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 2 21:43:23.702: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 2 21:43:23.705: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 2 21:43:23.712: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 2 21:43:23.712: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 2 21:43:23.714: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 2 21:43:23.718: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 2 21:43:23.718: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 2 21:43:23.723: INFO: Updating deployment webserver-deployment Apr 2 21:43:23.723: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 2 21:43:23.757: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 2 21:43:23.774: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 
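(Note on the arithmetic verified above: the deployment is scaled from 10 to 30 while a rollout to a non-existent image is in flight, so two replicasets coexist, at .spec.replicas = 8 (old) and 5 (new) before the scale. With maxSurge: 3 the replicasets may hold 30 + 3 = 33 pods in total, and proportional scaling splits the extra 20 replicas roughly in the ratio 8:5, i.e. +12 and +8, which is exactly the 20 and 13 the test asserts. An approximate by-hand reproduction, using the names from the log; the test itself updates .spec.replicas through the API rather than via kubectl:)

  # Scale the deployment mid-rollout
  kubectl --namespace=deployment-7420 scale deployment webserver-deployment --replicas=30
  # Observe how the 33 allowed pods are split proportionally across the two replicasets
  kubectl --namespace=deployment-7420 get rs -l name=httpd \
    -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas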
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 2 21:43:23.950: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7420 /apis/apps/v1/namespaces/deployment-7420/deployments/webserver-deployment 68e7e2c1-783b-4ec8-aa6c-3ddca8a3b5cf 4856897 3 2020-04-02 21:43:11 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00506b498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-02 21:43:22 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-02 21:43:23 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 2 21:43:24.043: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7420 /apis/apps/v1/namespaces/deployment-7420/replicasets/webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 4856941 3 2020-04-02 21:43:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 68e7e2c1-783b-4ec8-aa6c-3ddca8a3b5cf 0xc00506b967 0xc00506b968}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00506b9d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 21:43:24.043: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 2 21:43:24.044: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7420 /apis/apps/v1/namespaces/deployment-7420/replicasets/webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 4856937 3 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 68e7e2c1-783b-4ec8-aa6c-3ddca8a3b5cf 0xc00506b8a7 0xc00506b8a8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00506b908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 2 21:43:24.100: INFO: Pod "webserver-deployment-595b5b9587-5ckmv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5ckmv webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-5ckmv d504cf1e-b484-4f59-b1b2-f53e64beca1a 4856760 0 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc00506bea7 0xc00506bea8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.227,StartTime:2020-04-02 21:43:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8ff9beed514d098a97fdad4938230d54c863cd2aad0eb0d8565940c696a4212d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.100: INFO: Pod "webserver-deployment-595b5b9587-6lqjm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6lqjm webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-6lqjm c2ca907a-16bf-4ee3-bd56-e59b9ab9ccd6 4856788 0 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe037 0xc004fbe038}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.228,StartTime:2020-04-02 21:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://858134a795776e476a95b92f29c55691d320c2dc716a28f83172be261377a910,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.100: INFO: Pod "webserver-deployment-595b5b9587-9xsc4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9xsc4 webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-9xsc4 fcef517b-d439-459f-bf59-998cab485841 4856921 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe1b7 0xc004fbe1b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.101: INFO: Pod "webserver-deployment-595b5b9587-c6vsn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c6vsn webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-c6vsn f74061ac-070a-4f7d-b2be-3a7a95d0c515 4856935 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe307 0xc004fbe308}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.101: INFO: Pod "webserver-deployment-595b5b9587-cstpn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cstpn webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-cstpn 2281dda0-b936-48b3-9155-30accb395854 4856750 0 2020-04-02 
21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe427 0xc004fbe428}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.226,StartTime:2020-04-02 21:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://65a3393960e57c095490e34409948c058823e3718a1d7eee1a1c59342daadc2d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.101: INFO: Pod "webserver-deployment-595b5b9587-dpdts" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dpdts webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-dpdts 76a14274-beff-4f59-aee1-e411eca42454 4856922 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe5a7 0xc004fbe5a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,V
alue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.102: INFO: Pod "webserver-deployment-595b5b9587-f52nz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f52nz webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-f52nz 44e27b74-0157-4e03-8bd0-8fbbae17d3c9 4856743 0 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe6d7 0xc004fbe6d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.173,StartTime:2020-04-02 21:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aa45b57437ed37bc0c5b04c5057ab3ac467017b12011dc9ac0d9b17284321405,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.102: INFO: Pod "webserver-deployment-595b5b9587-f7ndn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f7ndn webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-f7ndn 105f595f-320b-423f-b4a7-81c5463fd69c 4856898 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe857 0xc004fbe858}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.102: INFO: Pod "webserver-deployment-595b5b9587-hhw2m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hhw2m webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-hhw2m 9ccd1cfa-5d42-4e76-89a2-0f57f5461a78 4856934 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbe977 0xc004fbe978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.102: INFO: Pod "webserver-deployment-595b5b9587-kjd2t" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kjd2t webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-kjd2t 5e4cc370-f6c9-4fe5-b9f4-776def3e1e35 4856945 0 
2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbea97 0xc004fbea98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-02 21:43:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.103: INFO: Pod "webserver-deployment-595b5b9587-lzhjn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lzhjn webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-lzhjn 1d3010a8-9a0b-44c8-a69a-3552d0f8d6ae 4856779 0 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbec17 0xc004fbec18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuber
netes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.174,StartTime:2020-04-02 21:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://99fe09b4d2c011dd0764d6df54b2c98ceb711457a37bfc87e993c90412ea9b66,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.103: INFO: Pod "webserver-deployment-595b5b9587-mvp89" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mvp89 webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-mvp89 07bb0428-d1c7-476f-84ab-dd82750a6109 4856938 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbeda7 0xc004fbeda8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-02 21:43:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.103: INFO: Pod "webserver-deployment-595b5b9587-qg6gq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qg6gq webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-qg6gq 260f247c-a1e4-4846-ab68-8842ab87fc57 4856929 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbef07 0xc004fbef08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.103: INFO: Pod "webserver-deployment-595b5b9587-r9wq5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r9wq5 webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-r9wq5 c97eee78-0319-4fcf-bc00-e4a3b17674c3 4856920 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbf027 0xc004fbf028}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSC
onfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.103: INFO: Pod "webserver-deployment-595b5b9587-rjsmg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rjsmg webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-rjsmg 3140cf91-aae8-4038-9fff-c02261037ee0 4856914 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbf157 0xc004fbf158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExe
cute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.104: INFO: Pod "webserver-deployment-595b5b9587-tv9d8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tv9d8 webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-tv9d8 b0392122-8eed-4b64-b7cf-3102ab4a22c3 4856806 0 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbf287 0xc004fbf288}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:
*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.176,StartTime:2020-04-02 21:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b43e6416b53832637b5083090a9f6ebae436dd8cfdd4e50e5998e838e53da7dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.104: INFO: Pod "webserver-deployment-595b5b9587-vdqz6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vdqz6 webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-vdqz6 e1a46fef-5c23-4c72-bb71-38d20f5208c6 4856931 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbf417 0xc004fbf418}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.104: INFO: Pod "webserver-deployment-595b5b9587-wl9r6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wl9r6 webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-wl9r6 13352680-47a6-40aa-a0b3-a2b3b6f1bdef 4856812 0 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbf537 0xc004fbf538}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.177,StartTime:2020-04-02 21:43:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1c2ad2b751c7f7616614d8b48c6930a19312a8d8301e90bf01ca0789ae9e3ddc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.104: INFO: Pod "webserver-deployment-595b5b9587-zq7rw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zq7rw webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-zq7rw 51004f28-739a-44d6-a6f9-c4c62230479b 4856809 0 2020-04-02 21:43:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbf6b7 0xc004fbf6b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.175,StartTime:2020-04-02 21:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:43:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f63b724fa01b93566300dfd44c7270ed4f20845631d8bd8156526a807fdcdc3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.105: INFO: Pod "webserver-deployment-595b5b9587-zs8fq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zs8fq webserver-deployment-595b5b9587- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-595b5b9587-zs8fq 4d15498c-83db-4196-be80-6e3559c988f7 4856936 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7271e5a0-a337-47bd-bf99-386404ebe7b5 0xc004fbf837 0xc004fbf838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.105: INFO: Pod "webserver-deployment-c7997dcc8-5dnjk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5dnjk webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-5dnjk 2d5d4acc-35ec-46de-8eee-bf4c06b951c3 4856862 0 2020-04-02 21:43:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004fbf957 0xc004fbf958}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-02 21:43:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.105: INFO: Pod "webserver-deployment-c7997dcc8-5h8gq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5h8gq webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-5h8gq 7b018d05-d557-41dc-8ef6-1834b0ab0ff9 4856875 0 2020-04-02 21:43:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004fbfae7 0xc004fbfae8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-02 21:43:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.106: INFO: Pod "webserver-deployment-c7997dcc8-6h4zn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6h4zn webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-6h4zn 92affff9-7b36-423f-a4ad-f46ecde9b892 4856878 0 2020-04-02 21:43:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004fbfc67 0xc004fbfc68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-02 21:43:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.106: INFO: Pod "webserver-deployment-c7997dcc8-7nj9k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7nj9k webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-7nj9k a8eacc8a-9082-43fe-b795-12425efc1a6a 4856846 0 2020-04-02 21:43:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004fbfde7 0xc004fbfde8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-02 21:43:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.106: INFO: Pod "webserver-deployment-c7997dcc8-9l5tm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9l5tm webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-9l5tm 1d7bf3b2-cd06-48bc-8368-3b3a579b0c87 4856873 0 2020-04-02 21:43:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004fbff67 0xc004fbff68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-02 21:43:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.106: INFO: Pod "webserver-deployment-c7997dcc8-c9wdk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c9wdk webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-c9wdk 0e9cea0c-bf4b-41a8-930b-5d8c7dae0986 4856927 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa0e7 0xc004faa0e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.107: INFO: Pod "webserver-deployment-c7997dcc8-gd82q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gd82q webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-gd82q 216c06f8-2eb7-4f98-8c93-afb6c6d6b114 4856940 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa217 0xc004faa218}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClass
Name:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.107: INFO: Pod "webserver-deployment-c7997dcc8-hvgnp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hvgnp webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-hvgnp 14327a79-afbd-4ae8-ba09-47e49b10f186 4856928 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa347 0xc004faa348}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProces
sNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.107: INFO: Pod "webserver-deployment-c7997dcc8-jwsl4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jwsl4 webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-jwsl4 dcc37fb3-524f-4fc8-9865-b41e3a19f0cb 4856923 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa497 0xc004faa498}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAl
ias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.107: INFO: Pod "webserver-deployment-c7997dcc8-k5v8b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k5v8b webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-k5v8b 36f611a9-db94-448b-bd2c-e27882828160 4856926 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa5c7 0xc004faa5c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.107: INFO: Pod "webserver-deployment-c7997dcc8-m8xx4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m8xx4 webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-m8xx4 0b4a3136-9b36-421f-a087-ed3cb846a50e 4856904 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa6f7 0xc004faa6f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleratio
n{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.107: INFO: Pod "webserver-deployment-c7997dcc8-q967x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q967x webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-q967x 428b0ae6-14ee-4a93-9ec3-5af6ca876c2a 4856911 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa827 0xc004faa828}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Ex
ists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 21:43:24.107: INFO: Pod "webserver-deployment-c7997dcc8-tjjvl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tjjvl webserver-deployment-c7997dcc8- deployment-7420 /api/v1/namespaces/deployment-7420/pods/webserver-deployment-c7997dcc8-tjjvl d4c545c0-94a4-4363-a87b-445955b0f4e5 4856925 0 2020-04-02 21:43:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ee4ad8ba-01d5-4016-9966-c50459d99ad9 0xc004faa957 0xc004faa958}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wg4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wg4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tol
eration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:43:24.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7420" for this suite. • [SLOW TEST:13.229 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":148,"skipped":2388,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:43:24.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:43:25.250: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:43:27.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:43:29.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:43:31.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:43:33.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:43:35.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, 
loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:43:37.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:43:39.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460605, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:43:42.353: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:43:42.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1431-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:43:43.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1772" for this suite. 
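------------------------------
The registration step logged above ("Registering the mutating webhook for custom resource e2e-test-webhook-1431-crds.webhook.example.com via the AdmissionRegistration API") amounts to creating a MutatingWebhookConfiguration that routes CREATE requests for that custom resource to the e2e-test-webhook service seen earlier in this entry. A minimal sketch of that shape in Go; the webhook name, path, API version list, and CA bundle below are placeholders, since the framework generates the real values:

package main

import (
	"encoding/json"
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch only: names, path, and CA bundle are placeholders, not the
	// values generated by the e2e framework.
	sideEffects := admissionv1.SideEffectClassNone
	path := "/mutating-custom-resource"
	cfg := admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "mutate-custom-resource.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-1772",     // test namespace from this run
					Name:      "e2e-test-webhook", // service name from this run
					Path:      &path,
				},
				CABundle: []byte("...server cert from the setup step..."),
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-1431-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}

Once an object of this shape exists, the API server calls the webhook before persisting matching custom resources, which is why the very next step can create a CR and observe the mutation.
------------------------------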
STEP: Destroying namespace "webhook-1772-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.394 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":149,"skipped":2404,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:43:43.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:43:43.815: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741" in namespace "downward-api-8722" to be "success or failure" Apr 2 21:43:43.819: INFO: Pod "downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31533ms Apr 2 21:43:45.823: INFO: Pod "downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007992966s Apr 2 21:43:47.827: INFO: Pod "downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012275064s STEP: Saw pod success Apr 2 21:43:47.827: INFO: Pod "downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741" satisfied condition "success or failure" Apr 2 21:43:47.830: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741 container client-container: STEP: delete the pod Apr 2 21:43:47.886: INFO: Waiting for pod downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741 to disappear Apr 2 21:43:47.897: INFO: Pod downwardapi-volume-09bc5a1f-5ee8-43cc-b339-24e9d3b7d741 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:43:47.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8722" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2417,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:43:47.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 2 21:43:47.940: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 21:43:47.971: INFO: Waiting for terminating namespaces to be deleted... Apr 2 21:43:47.974: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 2 21:43:47.979: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 2 21:43:47.979: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:43:47.979: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 2 21:43:47.979: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:43:47.979: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 2 21:43:47.984: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 2 21:43:47.984: INFO: Container kube-bench ready: false, restart count 0 Apr 2 21:43:47.984: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 2 21:43:47.984: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:43:47.984: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 2 21:43:47.984: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:43:47.984: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 2 21:43:47.984: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b8c16135-fa43-4f62-a1c2-78bb416682a5 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-b8c16135-fa43-4f62-a1c2-78bb416682a5 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b8c16135-fa43-4f62-a1c2-78bb416682a5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:43:56.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3249" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.298 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":151,"skipped":2417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:43:56.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 2 21:43:56.250: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:44:01.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6007" for this suite. 
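------------------------------
For the init-container test above: with restartPolicy Never, a failing init container is not retried, the app containers never start, and the pod terminates in phase Failed, which is the behaviour being asserted. A minimal sketch of that pod shape, borrowing the container names, images, and commands this suite uses for its init-container pods (visible in the RestartAlways pod dump further below):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: with RestartPolicy=Never the failing init container is run
	// once, the app container below never starts, and the pod goes Failed.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-failure"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"}, // fails immediately
			}},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------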
• [SLOW TEST:5.578 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":152,"skipped":2493,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:44:01.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:44:02.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f" in namespace "downward-api-4676" to be "success or failure" Apr 2 21:44:02.223: INFO: Pod "downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.758667ms Apr 2 21:44:04.227: INFO: Pod "downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021519064s Apr 2 21:44:06.231: INFO: Pod "downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025478706s STEP: Saw pod success Apr 2 21:44:06.231: INFO: Pod "downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f" satisfied condition "success or failure" Apr 2 21:44:06.234: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f container client-container: STEP: delete the pod Apr 2 21:44:06.272: INFO: Waiting for pod downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f to disappear Apr 2 21:44:06.281: INFO: Pod downwardapi-volume-f695b62c-a0f8-49e3-b7e2-95092935586f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:44:06.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4676" for this suite. 
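------------------------------
The podname test above hinges on a downward API volume file backed by a fieldRef to metadata.name; the container prints the file and the test matches it against the pod's own name. The core of it as a sketch (volume name and file path are stand-ins):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch: a downward API file whose content is the pod's own
	// metadata.name, resolved by the kubelet at mount time.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------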
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2495,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:44:06.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:44:06.347: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:44:11.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8112" for this suite. • [SLOW TEST:5.261 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":154,"skipped":2499,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:44:11.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 2 21:44:19.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 21:44:19.708: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 21:44:21.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 21:44:21.712: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 21:44:23.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 21:44:23.712: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 21:44:25.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 21:44:25.713: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 21:44:27.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 21:44:27.713: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 21:44:29.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 21:44:29.713: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:44:29.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4464" for this suite. • [SLOW TEST:18.180 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:44:29.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 2 21:44:29.796: INFO: Waiting up to 5m0s for pod "downward-api-49552084-7e13-4101-bee4-a6da953986cd" in namespace "downward-api-2710" to be "success or failure" Apr 2 21:44:29.803: INFO: Pod "downward-api-49552084-7e13-4101-bee4-a6da953986cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.465555ms Apr 2 21:44:31.807: INFO: Pod "downward-api-49552084-7e13-4101-bee4-a6da953986cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010831933s Apr 2 21:44:33.812: INFO: Pod "downward-api-49552084-7e13-4101-bee4-a6da953986cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015890793s STEP: Saw pod success Apr 2 21:44:33.812: INFO: Pod "downward-api-49552084-7e13-4101-bee4-a6da953986cd" satisfied condition "success or failure" Apr 2 21:44:33.815: INFO: Trying to get logs from node jerma-worker pod downward-api-49552084-7e13-4101-bee4-a6da953986cd container dapi-container: STEP: delete the pod Apr 2 21:44:33.852: INFO: Waiting for pod downward-api-49552084-7e13-4101-bee4-a6da953986cd to disappear Apr 2 21:44:33.856: INFO: Pod downward-api-49552084-7e13-4101-bee4-a6da953986cd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:44:33.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2710" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2533,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:44:33.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:44:34.492: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:44:36.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460674, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460674, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460674, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460674, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:44:39.557: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:44:39.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3116-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:44:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7430" for this suite. STEP: Destroying namespace "webhook-7430-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.044 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":157,"skipped":2533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:44:40.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 2 21:44:40.960: INFO: PodSpec: initContainers in spec.initContainers Apr 2 21:45:29.805: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8b228652-2358-4abb-99cb-0dc3ae115cff", GenerateName:"", Namespace:"init-container-3742", SelfLink:"/api/v1/namespaces/init-container-3742/pods/pod-init-8b228652-2358-4abb-99cb-0dc3ae115cff", UID:"0657d0f4-ec2a-4a57-96e7-b7367ba20e5c", ResourceVersion:"4857940", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721460680, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"960089861"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4bhrn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0009a5640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4bhrn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4bhrn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4bhrn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003fd30b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f56300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003fd3140)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003fd3160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003fd3168), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003fd316c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460681, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460681, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460681, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460681, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", 
PodIP:"10.244.2.253", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.253"}}, StartTime:(*v1.Time)(0xc002497be0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027ea9a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027eaa10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0d2faeb40a9a60ffd761b1abfa22ebc8557a21feac890a5a5badf9c435a4ee45", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002497c20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002497c00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003fd31ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:45:29.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3742" for this suite. 
• [SLOW TEST:48.966 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":158,"skipped":2557,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:45:29.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 2 21:45:34.494: INFO: Successfully updated pod "labelsupdatee71577f5-facf-417a-bfa7-ed2fe0e2077a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:45:36.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4246" for this suite. 
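------------------------------
The update-labels test above uses the same downward API volume mechanism as the podname test earlier, but backed by metadata.labels: after the test patches the pod's labels ("Successfully updated pod"), the kubelet rewrites the mounted file and the container observes the new content. The relevant fragment as a sketch (volume name and file path are stand-ins):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch: a downward API file backed by the pod's labels; the kubelet
	// refreshes the file when the labels change, so updates are visible to
	// the running container without a restart.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------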
• [SLOW TEST:6.660 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2576,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:45:36.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-6hg2b in namespace proxy-5044 I0402 21:45:36.664673 6 runners.go:189] Created replication controller with name: proxy-service-6hg2b, namespace: proxy-5044, replica count: 1 I0402 21:45:37.715104 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 21:45:38.715328 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 21:45:39.715559 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 21:45:40.715790 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:41.716038 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:42.716306 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:43.716541 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:44.716774 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:45.717043 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:46.717390 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:47.717655 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:48.717923 6 runners.go:189] 
proxy-service-6hg2b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0402 21:45:49.718167 6 runners.go:189] proxy-service-6hg2b Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 21:45:49.721: INFO: setup took 13.144043872s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 2 21:45:49.729: INFO: (0) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 7.457379ms) Apr 2 21:45:49.729: INFO: (0) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 7.971691ms) Apr 2 21:45:49.729: INFO: (0) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 8.142334ms) Apr 2 21:45:49.729: INFO: (0) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 8.085357ms) Apr 2 21:45:49.729: INFO: (0) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 8.076922ms) Apr 2 21:45:49.731: INFO: (0) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 9.12766ms) Apr 2 21:45:49.732: INFO: (0) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 10.952906ms) Apr 2 21:45:49.732: INFO: (0) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 10.997048ms) Apr 2 21:45:49.733: INFO: (0) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 11.909863ms) Apr 2 21:45:49.733: INFO: (0) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 11.815311ms) Apr 2 21:45:49.734: INFO: (0) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 12.328866ms) Apr 2 21:45:49.740: INFO: (0) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 18.900671ms) Apr 2 21:45:49.740: INFO: (0) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 18.820942ms) Apr 2 21:45:49.740: INFO: (0) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 18.92936ms) Apr 2 21:45:49.740: INFO: (0) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 18.884826ms) Apr 2 21:45:49.741: INFO: (0) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: ... (200; 3.571112ms) Apr 2 21:45:49.744: INFO: (1) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.549191ms) Apr 2 21:45:49.745: INFO: (1) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 3.858568ms) Apr 2 21:45:49.745: INFO: (1) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... 
(200; 4.166132ms) Apr 2 21:45:49.745: INFO: (1) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.125049ms) Apr 2 21:45:49.745: INFO: (1) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.11961ms) Apr 2 21:45:49.745: INFO: (1) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 4.363438ms) Apr 2 21:45:49.746: INFO: (1) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.583916ms) Apr 2 21:45:49.747: INFO: (1) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 6.007563ms) Apr 2 21:45:49.747: INFO: (1) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 6.07073ms) Apr 2 21:45:49.747: INFO: (1) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 6.200785ms) Apr 2 21:45:49.747: INFO: (1) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 6.065418ms) Apr 2 21:45:49.747: INFO: (1) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 6.148533ms) Apr 2 21:45:49.750: INFO: (2) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 2.80681ms) Apr 2 21:45:49.750: INFO: (2) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 2.80474ms) Apr 2 21:45:49.750: INFO: (2) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 3.027785ms) Apr 2 21:45:49.750: INFO: (2) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 3.324916ms) Apr 2 21:45:49.752: INFO: (2) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: ... 
(200; 5.788669ms) Apr 2 21:45:49.753: INFO: (2) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 5.90794ms) Apr 2 21:45:49.753: INFO: (2) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.87901ms) Apr 2 21:45:49.753: INFO: (2) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 5.906014ms) Apr 2 21:45:49.753: INFO: (2) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 6.017106ms) Apr 2 21:45:49.753: INFO: (2) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 6.163127ms) Apr 2 21:45:49.753: INFO: (2) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 6.177901ms) Apr 2 21:45:49.753: INFO: (2) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 6.317676ms) Apr 2 21:45:49.754: INFO: (2) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 6.759533ms) Apr 2 21:45:49.754: INFO: (2) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 7.067841ms) Apr 2 21:45:49.758: INFO: (3) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.672389ms) Apr 2 21:45:49.758: INFO: (3) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.90885ms) Apr 2 21:45:49.758: INFO: (3) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 3.966424ms) Apr 2 21:45:49.758: INFO: (3) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.05896ms) Apr 2 21:45:49.758: INFO: (3) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 4.148267ms) Apr 2 21:45:49.758: INFO: (3) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 4.045038ms) Apr 2 21:45:49.758: INFO: (3) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: ... (200; 4.274381ms) Apr 2 21:45:49.759: INFO: (3) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.395267ms) Apr 2 21:45:49.759: INFO: (3) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 4.314236ms) Apr 2 21:45:49.759: INFO: (3) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 4.860905ms) Apr 2 21:45:49.759: INFO: (3) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 5.024373ms) Apr 2 21:45:49.759: INFO: (3) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 5.251791ms) Apr 2 21:45:49.760: INFO: (3) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 5.239281ms) Apr 2 21:45:49.760: INFO: (3) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 5.303229ms) Apr 2 21:45:49.760: INFO: (3) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.604683ms) Apr 2 21:45:49.764: INFO: (4) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 3.97645ms) Apr 2 21:45:49.764: INFO: (4) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... 
(200; 3.970958ms) Apr 2 21:45:49.764: INFO: (4) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 4.062348ms) Apr 2 21:45:49.764: INFO: (4) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 4.060027ms) Apr 2 21:45:49.764: INFO: (4) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 4.388144ms) Apr 2 21:45:49.764: INFO: (4) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 4.406213ms) Apr 2 21:45:49.764: INFO: (4) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 4.466586ms) Apr 2 21:45:49.765: INFO: (4) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 4.531635ms) Apr 2 21:45:49.765: INFO: (4) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.89501ms) Apr 2 21:45:49.765: INFO: (4) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.983533ms) Apr 2 21:45:49.765: INFO: (4) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 4.925044ms) Apr 2 21:45:49.765: INFO: (4) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 4.95422ms) Apr 2 21:45:49.765: INFO: (4) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 4.978086ms) Apr 2 21:45:49.765: INFO: (4) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... (200; 3.836886ms) Apr 2 21:45:49.770: INFO: (5) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 4.607928ms) Apr 2 21:45:49.770: INFO: (5) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 4.587004ms) Apr 2 21:45:49.770: INFO: (5) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: ... (200; 3.992948ms) Apr 2 21:45:49.776: INFO: (6) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 4.105627ms) Apr 2 21:45:49.776: INFO: (6) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 4.127389ms) Apr 2 21:45:49.776: INFO: (6) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... (200; 4.809909ms) Apr 2 21:45:49.777: INFO: (6) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.766971ms) Apr 2 21:45:49.777: INFO: (6) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 4.938506ms) Apr 2 21:45:49.777: INFO: (6) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 5.131801ms) Apr 2 21:45:49.777: INFO: (6) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 5.185449ms) Apr 2 21:45:49.777: INFO: (6) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 5.284672ms) Apr 2 21:45:49.780: INFO: (7) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 2.924354ms) Apr 2 21:45:49.780: INFO: (7) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.009017ms) Apr 2 21:45:49.780: INFO: (7) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... 
(200; 2.994809ms) Apr 2 21:45:49.781: INFO: (7) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 3.842087ms) Apr 2 21:45:49.781: INFO: (7) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 4.237515ms) Apr 2 21:45:49.781: INFO: (7) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.230537ms) Apr 2 21:45:49.781: INFO: (7) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 4.243219ms) Apr 2 21:45:49.782: INFO: (7) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 4.465193ms) Apr 2 21:45:49.782: INFO: (7) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 4.444303ms) Apr 2 21:45:49.782: INFO: (7) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.548279ms) Apr 2 21:45:49.782: INFO: (7) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test (200; 4.770596ms) Apr 2 21:45:49.782: INFO: (7) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 4.707133ms) Apr 2 21:45:49.784: INFO: (8) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 2.062015ms) Apr 2 21:45:49.786: INFO: (8) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test (200; 4.042696ms) Apr 2 21:45:49.786: INFO: (8) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 4.14928ms) Apr 2 21:45:49.786: INFO: (8) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 4.16591ms) Apr 2 21:45:49.786: INFO: (8) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 4.15856ms) Apr 2 21:45:49.786: INFO: (8) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 4.210735ms) Apr 2 21:45:49.787: INFO: (8) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 4.505659ms) Apr 2 21:45:49.787: INFO: (8) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 4.60533ms) Apr 2 21:45:49.787: INFO: (8) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 4.546222ms) Apr 2 21:45:49.787: INFO: (8) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 4.611359ms) Apr 2 21:45:49.787: INFO: (8) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 4.717053ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 7.262483ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... 
(200; 7.387493ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 7.356773ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 7.366705ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 7.405356ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 7.467636ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 7.426436ms) Apr 2 21:45:49.794: INFO: (9) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... (200; 7.399433ms) Apr 2 21:45:49.795: INFO: (9) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 8.39255ms) Apr 2 21:45:49.797: INFO: (9) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 10.028782ms) Apr 2 21:45:49.797: INFO: (9) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 10.10604ms) Apr 2 21:45:49.797: INFO: (9) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 10.262658ms) Apr 2 21:45:49.797: INFO: (9) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 10.270695ms) Apr 2 21:45:49.797: INFO: (9) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 10.312103ms) Apr 2 21:45:49.797: INFO: (9) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 10.255363ms) Apr 2 21:45:49.800: INFO: (10) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 2.760382ms) Apr 2 21:45:49.800: INFO: (10) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.040219ms) Apr 2 21:45:49.800: INFO: (10) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... 
(200; 3.202039ms) Apr 2 21:45:49.800: INFO: (10) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 3.176948ms) Apr 2 21:45:49.800: INFO: (10) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test (200; 3.447736ms) Apr 2 21:45:49.801: INFO: (10) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 3.963368ms) Apr 2 21:45:49.803: INFO: (10) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 5.255796ms) Apr 2 21:45:49.803: INFO: (10) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 5.248714ms) Apr 2 21:45:49.803: INFO: (10) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 5.46958ms) Apr 2 21:45:49.803: INFO: (10) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 5.499899ms) Apr 2 21:45:49.803: INFO: (10) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.665758ms) Apr 2 21:45:49.803: INFO: (10) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 5.745267ms) Apr 2 21:45:49.806: INFO: (11) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 3.029517ms) Apr 2 21:45:49.807: INFO: (11) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.347086ms) Apr 2 21:45:49.807: INFO: (11) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 3.341103ms) Apr 2 21:45:49.807: INFO: (11) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 3.427739ms) Apr 2 21:45:49.807: INFO: (11) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.53765ms) Apr 2 21:45:49.807: INFO: (11) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 3.60701ms) Apr 2 21:45:49.807: INFO: (11) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.578164ms) Apr 2 21:45:49.807: INFO: (11) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... (200; 4.898793ms) Apr 2 21:45:49.808: INFO: (11) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 5.014106ms) Apr 2 21:45:49.811: INFO: (12) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 2.686149ms) Apr 2 21:45:49.812: INFO: (12) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 3.535774ms) Apr 2 21:45:49.812: INFO: (12) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 3.793642ms) Apr 2 21:45:49.812: INFO: (12) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 3.796062ms) Apr 2 21:45:49.812: INFO: (12) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 4.132298ms) Apr 2 21:45:49.812: INFO: (12) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.1581ms) Apr 2 21:45:49.813: INFO: (12) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... 
(200; 4.62735ms) Apr 2 21:45:49.813: INFO: (12) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.692684ms) Apr 2 21:45:49.813: INFO: (12) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 4.682669ms) Apr 2 21:45:49.813: INFO: (12) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.678706ms) Apr 2 21:45:49.813: INFO: (12) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 4.924948ms) Apr 2 21:45:49.813: INFO: (12) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 5.161169ms) Apr 2 21:45:49.813: INFO: (12) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 5.204781ms) Apr 2 21:45:49.814: INFO: (12) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.38947ms) Apr 2 21:45:49.817: INFO: (13) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.469147ms) Apr 2 21:45:49.817: INFO: (13) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 3.426056ms) Apr 2 21:45:49.817: INFO: (13) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... (200; 4.113146ms) Apr 2 21:45:49.818: INFO: (13) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.291632ms) Apr 2 21:45:49.818: INFO: (13) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 4.344508ms) Apr 2 21:45:49.818: INFO: (13) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 4.314493ms) Apr 2 21:45:49.818: INFO: (13) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.527857ms) Apr 2 21:45:49.819: INFO: (13) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.615745ms) Apr 2 21:45:49.819: INFO: (13) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 5.661999ms) Apr 2 21:45:49.819: INFO: (13) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 5.644618ms) Apr 2 21:45:49.819: INFO: (13) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 5.695265ms) Apr 2 21:45:49.819: INFO: (13) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 5.687002ms) Apr 2 21:45:49.819: INFO: (13) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 5.70457ms) Apr 2 21:45:49.823: INFO: (14) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.585498ms) Apr 2 21:45:49.823: INFO: (14) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.83781ms) Apr 2 21:45:49.824: INFO: (14) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.323021ms) Apr 2 21:45:49.824: INFO: (14) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 4.391504ms) Apr 2 21:45:49.824: INFO: (14) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 4.445318ms) Apr 2 21:45:49.824: INFO: (14) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... 
(200; 4.698269ms) Apr 2 21:45:49.824: INFO: (14) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 4.761419ms) Apr 2 21:45:49.825: INFO: (14) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 4.930679ms) Apr 2 21:45:49.825: INFO: (14) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 4.972948ms) Apr 2 21:45:49.825: INFO: (14) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 5.243733ms) Apr 2 21:45:49.825: INFO: (14) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.208063ms) Apr 2 21:45:49.825: INFO: (14) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 5.267918ms) Apr 2 21:45:49.825: INFO: (14) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test (200; 5.490171ms) Apr 2 21:45:49.827: INFO: (15) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 1.712042ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 4.617048ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.574857ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.787138ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 5.076437ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 5.05516ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 5.05501ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 5.00853ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: ... (200; 5.1236ms) Apr 2 21:45:49.830: INFO: (15) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 5.187354ms) Apr 2 21:45:49.831: INFO: (15) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 5.782871ms) Apr 2 21:45:49.831: INFO: (15) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 5.965492ms) Apr 2 21:45:49.831: INFO: (15) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 5.996861ms) Apr 2 21:45:49.831: INFO: (15) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 5.99372ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 3.341589ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test<... (200; 3.386436ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.487834ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.452122ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... 
(200; 3.48769ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 3.527683ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 3.944139ms) Apr 2 21:45:49.835: INFO: (16) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.825902ms) Apr 2 21:45:49.836: INFO: (16) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 4.521815ms) Apr 2 21:45:49.836: INFO: (16) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 4.718687ms) Apr 2 21:45:49.836: INFO: (16) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 4.776285ms) Apr 2 21:45:49.836: INFO: (16) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 4.825321ms) Apr 2 21:45:49.836: INFO: (16) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 4.800729ms) Apr 2 21:45:49.836: INFO: (16) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 4.790369ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 3.6053ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.754768ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 3.847553ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test (200; 3.730852ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 3.716427ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.76944ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.743388ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 3.788057ms) Apr 2 21:45:49.840: INFO: (17) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname2/proxy/: bar (200; 3.951426ms) Apr 2 21:45:49.841: INFO: (17) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname2/proxy/: bar (200; 4.340666ms) Apr 2 21:45:49.841: INFO: (17) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 4.477264ms) Apr 2 21:45:49.841: INFO: (17) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 4.574234ms) Apr 2 21:45:49.841: INFO: (17) /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/: foo (200; 4.660385ms) Apr 2 21:45:49.841: INFO: (17) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... 
(200; 4.795618ms) Apr 2 21:45:49.841: INFO: (17) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 4.78947ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2/proxy/: test (200; 3.636371ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 3.760331ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 3.801031ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.021585ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:1080/proxy/: ... (200; 4.012208ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:462/proxy/: tls qux (200; 4.028838ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 4.1348ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 4.07901ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/http:proxy-service-6hg2b-pfkd2:160/proxy/: foo (200; 4.152568ms) Apr 2 21:45:49.845: INFO: (18) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: ... (200; 6.048987ms) Apr 2 21:45:49.858: INFO: (19) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:1080/proxy/: test<... (200; 5.964786ms) Apr 2 21:45:49.858: INFO: (19) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname2/proxy/: tls qux (200; 6.051452ms) Apr 2 21:45:49.858: INFO: (19) /api/v1/namespaces/proxy-5044/services/https:proxy-service-6hg2b:tlsportname1/proxy/: tls baz (200; 6.040565ms) Apr 2 21:45:49.858: INFO: (19) /api/v1/namespaces/proxy-5044/services/http:proxy-service-6hg2b:portname1/proxy/: foo (200; 6.026123ms) Apr 2 21:45:49.858: INFO: (19) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:460/proxy/: tls baz (200; 6.018369ms) Apr 2 21:45:49.858: INFO: (19) /api/v1/namespaces/proxy-5044/pods/https:proxy-service-6hg2b-pfkd2:443/proxy/: test (200; 6.198828ms) Apr 2 21:45:49.858: INFO: (19) /api/v1/namespaces/proxy-5044/pods/proxy-service-6hg2b-pfkd2:162/proxy/: bar (200; 6.095713ms) STEP: deleting ReplicationController proxy-service-6hg2b in namespace proxy-5044, will wait for the garbage collector to delete the pods Apr 2 21:45:49.917: INFO: Deleting ReplicationController proxy-service-6hg2b took: 6.822555ms Apr 2 21:45:50.218: INFO: Terminating ReplicationController proxy-service-6hg2b pods took: 300.271598ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:45:59.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5044" for this suite. 
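Each path exercised above is an apiserver proxy URL: /api/v1/namespaces/<ns>/services/<name>:<port>/proxy/ forwards the request through the apiserver to the service (and the pods/... variants to a single pod), which is why every entry pairs a path with a response body (foo, bar, tls baz, tls qux, test) and a round-trip latency. As a rough illustration only, not the e2e framework's own code, one of these requests could be issued with a recent client-go like this; the namespace and service name are taken from the log:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the suite uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // GET /api/v1/namespaces/proxy-5044/services/proxy-service-6hg2b:portname1/proxy/
        // The test expects the body "foo" and a 200 within the logged latency.
        body, err := clientset.CoreV1().RESTClient().Get().
            Namespace("proxy-5044").
            Resource("services").
            Name("proxy-service-6hg2b:portname1").
            SubResource("proxy").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", body)
    }

The http: and https: prefixes in the logged names select the scheme the apiserver uses when dialing the backend port, which is why the tlsportname endpoints come back with the tls baz and tls qux bodies.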
• [SLOW TEST:22.996 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":160,"skipped":2586,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:45:59.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 2 21:46:00.179: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 2 21:46:02.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460760, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460760, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460760, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460760, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:46:05.224: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:46:05.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:46:06.402: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9577" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.940 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":161,"skipped":2605,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:46:06.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:46:41.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6326" for this suite. • [SLOW TEST:35.101 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":162,"skipped":2608,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:46:41.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:46:41.621: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 2 21:46:41.656: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 2 21:46:46.672: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 2 21:46:46.672: INFO: Creating deployment "test-rolling-update-deployment" Apr 2 21:46:46.675: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 2 21:46:46.680: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 2 21:46:48.687: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 2 21:46:48.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460806, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460806, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460806, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721460806, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 21:46:50.692: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 2 21:46:50.701: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1995 /apis/apps/v1/namespaces/deployment-1995/deployments/test-rolling-update-deployment e786ceae-3451-4f00-b60c-776d70c06d33 4858386 1 2020-04-02 21:46:46 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a626e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-02 21:46:46 +0000 UTC,LastTransitionTime:2020-04-02 21:46:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-02 21:46:50 +0000 UTC,LastTransitionTime:2020-04-02 21:46:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 2 21:46:50.704: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-1995 /apis/apps/v1/namespaces/deployment-1995/replicasets/test-rolling-update-deployment-67cf4f6444 2d07ea54-63ef-430d-a7fb-d93211a29372 4858375 1 2020-04-02 21:46:46 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e786ceae-3451-4f00-b60c-776d70c06d33 0xc004fdd3c7 0xc004fdd3c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004fdd438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 2 21:46:50.704: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 2 21:46:50.705: INFO: 
&ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1995 /apis/apps/v1/namespaces/deployment-1995/replicasets/test-rolling-update-controller ad02581e-bfc3-47d3-8f3d-945451eb2676 4858385 2 2020-04-02 21:46:41 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e786ceae-3451-4f00-b60c-776d70c06d33 0xc004fdd2f7 0xc004fdd2f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004fdd358 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 21:46:50.708: INFO: Pod "test-rolling-update-deployment-67cf4f6444-gvnd7" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-gvnd7 test-rolling-update-deployment-67cf4f6444- deployment-1995 /api/v1/namespaces/deployment-1995/pods/test-rolling-update-deployment-67cf4f6444-gvnd7 b5af54eb-9fa9-42fd-a0b2-f051395fe73c 4858374 0 2020-04-02 21:46:46 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 2d07ea54-63ef-430d-a7fb-d93211a29372 0xc004fdd887 0xc004fdd888}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kxk5r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kxk5r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kxk5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:46:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 21:46:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.2,StartTime:2020-04-02 21:46:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 21:46:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8bc6e73434197cbca547fbb9cdd3f1cd65d7c1bac729bff0fc01f87e92fe6af6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:46:50.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1995" for this suite. • [SLOW TEST:9.142 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":163,"skipped":2626,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:46:50.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 21:46:50.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3871' Apr 2 21:46:50.985: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 2 21:46:50.985: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Apr 2 21:46:51.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3871' Apr 2 21:46:51.103: INFO: stderr: "" Apr 2 21:46:51.103: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:46:51.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3871" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":164,"skipped":2628,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:46:51.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:46:51.185: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0de8d7b4-a27b-4098-8b10-b50be06f0bd3" in namespace "security-context-test-1770" to be "success or failure" Apr 2 21:46:51.189: INFO: Pod "busybox-privileged-false-0de8d7b4-a27b-4098-8b10-b50be06f0bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.789543ms Apr 2 21:46:53.193: INFO: Pod "busybox-privileged-false-0de8d7b4-a27b-4098-8b10-b50be06f0bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007721201s Apr 2 21:46:55.198: INFO: Pod "busybox-privileged-false-0de8d7b4-a27b-4098-8b10-b50be06f0bd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012022952s Apr 2 21:46:55.198: INFO: Pod "busybox-privileged-false-0de8d7b4-a27b-4098-8b10-b50be06f0bd3" satisfied condition "success or failure" Apr 2 21:46:55.206: INFO: Got logs for pod "busybox-privileged-false-0de8d7b4-a27b-4098-8b10-b50be06f0bd3": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:46:55.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1770" for this suite. 
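The pod behind this run is a busybox container whose security context sets privileged to false, so the network-configuration syscall it attempts is denied, which is exactly the "RTNETLINK answers: Operation not permitted" line captured from its logs. A minimal sketch of such a pod, assuming a recent client-go; the pod name, namespace, and command here are illustrative, not the suite's generated ones:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        privileged := false
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "docker.io/library/busybox:1.29",
                    // The ip call is denied in an unprivileged container
                    // ("RTNETLINK answers: Operation not permitted"); the
                    // "|| true" lets the pod still exit 0, matching the
                    // Succeeded phase seen in the log.
                    Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
                    SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
                }},
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }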
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2636,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:46:55.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 2 21:46:59.807: INFO: Successfully updated pod "labelsupdatee1fd4cc6-aee3-4e0b-9069-6d61d1722593" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:01.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3921" for this suite. • [SLOW TEST:6.616 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2643,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:01.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:47:01.894: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.832863ms) Apr 2 21:47:01.898: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.667726ms) Apr 2 21:47:01.901: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.160559ms) Apr 2 21:47:01.905: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.430471ms) Apr 2 21:47:01.908: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.093069ms) Apr 2 21:47:01.911: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.321032ms) Apr 2 21:47:01.915: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.610057ms) Apr 2 21:47:01.918: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.345222ms) Apr 2 21:47:01.935: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 16.636363ms) Apr 2 21:47:01.938: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.467165ms) Apr 2 21:47:01.941: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.035555ms) Apr 2 21:47:01.945: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.221328ms) Apr 2 21:47:01.948: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.305506ms) Apr 2 21:47:01.951: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.487192ms) Apr 2 21:47:01.954: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.13132ms) Apr 2 21:47:01.957: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.337244ms) Apr 2 21:47:01.960: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.662295ms) Apr 2 21:47:01.963: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.735504ms) Apr 2 21:47:01.966: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.924338ms) Apr 2 21:47:01.969: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.988031ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:01.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9363" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":167,"skipped":2650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:01.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 21:47:02.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9473' Apr 2 21:47:02.122: INFO: stderr: "" Apr 2 21:47:02.122: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 2 21:47:07.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9473 -o json' Apr 2 21:47:07.456: INFO: stderr: "" Apr 2 21:47:07.456: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-02T21:47:02Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9473\",\n \"resourceVersion\": \"4858534\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9473/pods/e2e-test-httpd-pod\",\n \"uid\": \"00ce5389-7ae1-4f55-8f33-5afd7e97bf8e\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vbhsl\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": 
\"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vbhsl\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vbhsl\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T21:47:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T21:47:04Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T21:47:04Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T21:47:02Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://219a9eb3513ecc05bb333dfffac7674ea0a2cbe8eeb91b660a8e2cc7c6e56f82\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-02T21:47:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.4\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.4\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-02T21:47:02Z\"\n }\n}\n" STEP: replace the image in the pod Apr 2 21:47:07.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9473' Apr 2 21:47:07.921: INFO: stderr: "" Apr 2 21:47:07.921: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Apr 2 21:47:07.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9473' Apr 2 21:47:19.494: INFO: stderr: "" Apr 2 21:47:19.494: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:19.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9473" for this suite. 
• [SLOW TEST:17.525 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":168,"skipped":2730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:19.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:47:19.541: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 2 21:47:21.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 create -f -' Apr 2 21:47:24.596: INFO: stderr: "" Apr 2 21:47:24.596: INFO: stdout: "e2e-test-crd-publish-openapi-8242-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 2 21:47:24.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 delete e2e-test-crd-publish-openapi-8242-crds test-cr' Apr 2 21:47:24.712: INFO: stderr: "" Apr 2 21:47:24.712: INFO: stdout: "e2e-test-crd-publish-openapi-8242-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 2 21:47:24.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 apply -f -' Apr 2 21:47:24.934: INFO: stderr: "" Apr 2 21:47:24.934: INFO: stdout: "e2e-test-crd-publish-openapi-8242-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 2 21:47:24.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 delete e2e-test-crd-publish-openapi-8242-crds test-cr' Apr 2 21:47:25.039: INFO: stderr: "" Apr 2 21:47:25.039: INFO: stdout: "e2e-test-crd-publish-openapi-8242-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 2 21:47:25.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8242-crds' Apr 2 21:47:25.262: INFO: stderr: "" Apr 2 21:47:25.262: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8242-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:27.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2071" for this suite. • [SLOW TEST:7.652 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":169,"skipped":2781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:27.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:39.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9061" for this suite. 
• [SLOW TEST:12.176 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":170,"skipped":2810,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:39.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-19d3300d-8db9-4ddb-a706-382c4e7a0c83 STEP: Creating a pod to test consume configMaps Apr 2 21:47:39.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49" in namespace "configmap-8909" to be "success or failure" Apr 2 21:47:39.647: INFO: Pod "pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49": Phase="Pending", Reason="", readiness=false. Elapsed: 22.177244ms Apr 2 21:47:41.652: INFO: Pod "pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026340764s Apr 2 21:47:43.656: INFO: Pod "pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030368449s STEP: Saw pod success Apr 2 21:47:43.656: INFO: Pod "pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49" satisfied condition "success or failure" Apr 2 21:47:43.659: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49 container configmap-volume-test: STEP: delete the pod Apr 2 21:47:43.695: INFO: Waiting for pod pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49 to disappear Apr 2 21:47:43.731: INFO: Pod pod-configmaps-4a0b7fd9-e276-4ae4-830a-d502c7285f49 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:43.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8909" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:43.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 21:47:43.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7329' Apr 2 21:47:43.979: INFO: stderr: "" Apr 2 21:47:43.979: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Apr 2 21:47:43.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7329' Apr 2 21:47:49.288: INFO: stderr: "" Apr 2 21:47:49.288: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:49.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7329" for this suite. 
• [SLOW TEST:5.495 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":172,"skipped":2838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:49.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-68e01caf-5b8b-4bde-a191-704e7943ba92 STEP: Creating a pod to test consume configMaps Apr 2 21:47:49.370: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce" in namespace "projected-4740" to be "success or failure" Apr 2 21:47:49.402: INFO: Pod "pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce": Phase="Pending", Reason="", readiness=false. Elapsed: 31.503809ms Apr 2 21:47:51.408: INFO: Pod "pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037766954s Apr 2 21:47:53.412: INFO: Pod "pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041830815s STEP: Saw pod success Apr 2 21:47:53.412: INFO: Pod "pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce" satisfied condition "success or failure" Apr 2 21:47:53.415: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce container projected-configmap-volume-test: STEP: delete the pod Apr 2 21:47:53.448: INFO: Waiting for pod pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce to disappear Apr 2 21:47:53.458: INFO: Pod pod-projected-configmaps-12a8c0b1-6f56-4f34-81ae-ed370dd2edce no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:47:53.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4740" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2871,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:47:53.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e62b6788-6958-4dbf-8b9f-c1b97faab5a4 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-e62b6788-6958-4dbf-8b9f-c1b97faab5a4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:49:25.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6186" for this suite. • [SLOW TEST:92.500 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2878,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:49:25.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 2 21:49:30.575: INFO: Successfully updated pod "pod-update-69d8a4c3-0147-4d96-a32a-e42d535f79b0" STEP: verifying the updated pod is in kubernetes Apr 2 21:49:30.596: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:49:30.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-353" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2882,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:49:30.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:49:45.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6277" for this suite. STEP: Destroying namespace "nsdeletetest-6354" for this suite. Apr 2 21:49:45.857: INFO: Namespace nsdeletetest-6354 was already deleted STEP: Destroying namespace "nsdeletetest-5427" for this suite. 
• [SLOW TEST:15.255 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":176,"skipped":2895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:49:45.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:49:45.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea" in namespace "projected-7911" to be "success or failure" Apr 2 21:49:45.923: INFO: Pod "downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea": Phase="Pending", Reason="", readiness=false. Elapsed: 10.036338ms Apr 2 21:49:47.927: INFO: Pod "downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014031803s Apr 2 21:49:49.932: INFO: Pod "downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018504824s STEP: Saw pod success Apr 2 21:49:49.932: INFO: Pod "downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea" satisfied condition "success or failure" Apr 2 21:49:49.936: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea container client-container: STEP: delete the pod Apr 2 21:49:49.955: INFO: Waiting for pod downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea to disappear Apr 2 21:49:49.959: INFO: Pod downwardapi-volume-3269c612-336b-48bd-8aa2-f568d55ceeea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:49:49.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7911" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2939,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:49:49.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:49:50.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823" in namespace "downward-api-8038" to be "success or failure" Apr 2 21:49:50.056: INFO: Pod "downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823": Phase="Pending", Reason="", readiness=false. Elapsed: 28.876428ms Apr 2 21:49:52.087: INFO: Pod "downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059950816s Apr 2 21:49:54.091: INFO: Pod "downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063782245s STEP: Saw pod success Apr 2 21:49:54.091: INFO: Pod "downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823" satisfied condition "success or failure" Apr 2 21:49:54.094: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823 container client-container: STEP: delete the pod Apr 2 21:49:54.169: INFO: Waiting for pod downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823 to disappear Apr 2 21:49:54.179: INFO: Pod downwardapi-volume-f83f3485-40b2-4d56-af25-13d702169823 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:49:54.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8038" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2958,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:49:54.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:49:54.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab" in namespace "downward-api-8119" to be "success or failure" Apr 2 21:49:54.287: INFO: Pod "downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab": Phase="Pending", Reason="", readiness=false. Elapsed: 23.248914ms Apr 2 21:49:56.291: INFO: Pod "downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02708681s Apr 2 21:49:58.296: INFO: Pod "downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031550129s STEP: Saw pod success Apr 2 21:49:58.296: INFO: Pod "downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab" satisfied condition "success or failure" Apr 2 21:49:58.299: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab container client-container: STEP: delete the pod Apr 2 21:49:58.319: INFO: Waiting for pod downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab to disappear Apr 2 21:49:58.329: INFO: Pod downwardapi-volume-1f80a8f2-fa3a-487f-a087-4b7e4e9928ab no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:49:58.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8119" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2968,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:49:58.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8455.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8455.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:50:04.494: INFO: DNS probes using dns-test-d28ed1fa-aff5-4d90-806e-834fbbba6207 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8455.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8455.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:50:10.599: INFO: File wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:10.603: INFO: File jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:10.603: INFO: Lookups using dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 failed for: [wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local] Apr 2 21:50:15.608: INFO: File wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:15.611: INFO: File jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 2 21:50:15.611: INFO: Lookups using dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 failed for: [wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local] Apr 2 21:50:20.607: INFO: File wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:20.614: INFO: File jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:20.614: INFO: Lookups using dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 failed for: [wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local] Apr 2 21:50:25.608: INFO: File wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:25.612: INFO: File jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:25.612: INFO: Lookups using dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 failed for: [wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local] Apr 2 21:50:30.632: INFO: File wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains '' instead of 'bar.example.com.' Apr 2 21:50:30.635: INFO: File jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 21:50:30.635: INFO: Lookups using dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 failed for: [wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local] Apr 2 21:50:35.613: INFO: File jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local from pod dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 contains '' instead of 'bar.example.com.' 
Apr 2 21:50:35.614: INFO: Lookups using dns-8455/dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 failed for: [jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local] Apr 2 21:50:40.808: INFO: DNS probes using dns-test-51d881e3-ab14-407a-a22d-590ef957a8f3 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8455.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8455.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8455.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8455.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 21:50:47.276: INFO: DNS probes using dns-test-9792ec9d-c8eb-4af9-a3a5-1201f5f42cf4 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:50:47.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8455" for this suite. • [SLOW TEST:49.052 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":180,"skipped":2969,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:50:47.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 2 21:50:47.428: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:50:55.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1146" for this suite. 
• [SLOW TEST:7.901 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":181,"skipped":2991,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:50:55.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:50:55.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:50:57.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461055, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461055, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461055, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461055, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:51:00.928: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 21:51:00.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3468-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:02.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-226" for this suite. STEP: Destroying namespace "webhook-226-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.966 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":182,"skipped":3010,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:02.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 2 21:51:06.870: INFO: Successfully updated pod "annotationupdate27724759-f45d-416d-adcc-1846965cbea1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:08.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-704" for this suite. • [SLOW TEST:6.738 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:08.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:25.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4905" for this suite. • [SLOW TEST:16.107 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":184,"skipped":3058,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:25.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 21:51:25.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b" in namespace "projected-3517" to be "success or failure" Apr 2 21:51:25.206: INFO: Pod "downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386932ms Apr 2 21:51:27.211: INFO: Pod "downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012647202s Apr 2 21:51:29.215: INFO: Pod "downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017028738s STEP: Saw pod success Apr 2 21:51:29.215: INFO: Pod "downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b" satisfied condition "success or failure" Apr 2 21:51:29.218: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b container client-container: STEP: delete the pod Apr 2 21:51:29.239: INFO: Waiting for pod downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b to disappear Apr 2 21:51:29.243: INFO: Pod downwardapi-volume-0c80c935-280d-49b2-a3cb-de5c54c5f62b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:29.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3517" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3151,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:29.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 2 21:51:29.335: INFO: Waiting up to 5m0s for pod "pod-52b39eba-54f0-453c-8662-678d14be0ad2" in namespace "emptydir-342" to be "success or failure" Apr 2 21:51:29.351: INFO: Pod "pod-52b39eba-54f0-453c-8662-678d14be0ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.65524ms Apr 2 21:51:31.356: INFO: Pod "pod-52b39eba-54f0-453c-8662-678d14be0ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020713956s Apr 2 21:51:33.360: INFO: Pod "pod-52b39eba-54f0-453c-8662-678d14be0ad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024838926s STEP: Saw pod success Apr 2 21:51:33.360: INFO: Pod "pod-52b39eba-54f0-453c-8662-678d14be0ad2" satisfied condition "success or failure" Apr 2 21:51:33.363: INFO: Trying to get logs from node jerma-worker2 pod pod-52b39eba-54f0-453c-8662-678d14be0ad2 container test-container: STEP: delete the pod Apr 2 21:51:33.399: INFO: Waiting for pod pod-52b39eba-54f0-453c-8662-678d14be0ad2 to disappear Apr 2 21:51:33.405: INFO: Pod pod-52b39eba-54f0-453c-8662-678d14be0ad2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:33.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-342" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3159,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:33.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4600/secret-test-f57a3a59-c715-4e84-8225-7b03af50a9e4 STEP: Creating a pod to test consume secrets Apr 2 21:51:33.509: INFO: Waiting up to 5m0s for pod "pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348" in namespace "secrets-4600" to be "success or failure" Apr 2 21:51:33.512: INFO: Pod "pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348": Phase="Pending", Reason="", readiness=false. Elapsed: 3.643241ms Apr 2 21:51:35.516: INFO: Pod "pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007508192s Apr 2 21:51:37.520: INFO: Pod "pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011479939s STEP: Saw pod success Apr 2 21:51:37.520: INFO: Pod "pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348" satisfied condition "success or failure" Apr 2 21:51:37.523: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348 container env-test: STEP: delete the pod Apr 2 21:51:37.539: INFO: Waiting for pod pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348 to disappear Apr 2 21:51:37.554: INFO: Pod pod-configmaps-5635b141-4500-495f-b1fb-2bac4469f348 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:37.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4600" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3179,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:37.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-311 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-311 I0402 21:51:37.724358 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-311, replica count: 2 I0402 21:51:40.774789 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 21:51:43.775034 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 21:51:43.775: INFO: Creating new exec pod Apr 2 21:51:48.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-311 execpodd7878 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 2 21:51:49.043: INFO: stderr: "I0402 21:51:48.947227 2319 log.go:172] (0xc0001149a0) (0xc0006a39a0) Create stream\nI0402 21:51:48.947289 2319 log.go:172] (0xc0001149a0) (0xc0006a39a0) Stream added, broadcasting: 1\nI0402 21:51:48.950831 2319 log.go:172] (0xc0001149a0) Reply frame received for 1\nI0402 21:51:48.950894 2319 log.go:172] (0xc0001149a0) (0xc000ace000) Create stream\nI0402 21:51:48.950916 2319 log.go:172] (0xc0001149a0) (0xc000ace000) Stream added, broadcasting: 3\nI0402 21:51:48.951872 2319 log.go:172] (0xc0001149a0) Reply frame received for 3\nI0402 21:51:48.951925 2319 log.go:172] (0xc0001149a0) (0xc000a88000) Create stream\nI0402 21:51:48.951944 2319 log.go:172] (0xc0001149a0) (0xc000a88000) Stream added, broadcasting: 5\nI0402 21:51:48.952874 2319 log.go:172] (0xc0001149a0) Reply frame received for 5\nI0402 21:51:49.035961 2319 log.go:172] (0xc0001149a0) Data frame received for 5\nI0402 21:51:49.035994 2319 log.go:172] (0xc000a88000) (5) Data frame handling\nI0402 21:51:49.036026 2319 log.go:172] (0xc000a88000) (5) Data frame sent\nI0402 21:51:49.036044 2319 log.go:172] (0xc0001149a0) Data frame received for 5\nI0402 21:51:49.036058 2319 log.go:172] (0xc000a88000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0402 21:51:49.036091 2319 log.go:172] (0xc000a88000) (5) Data frame sent\nI0402 21:51:49.036213 2319 log.go:172] (0xc0001149a0) Data frame 
received for 3\nI0402 21:51:49.036249 2319 log.go:172] (0xc000ace000) (3) Data frame handling\nI0402 21:51:49.036437 2319 log.go:172] (0xc0001149a0) Data frame received for 5\nI0402 21:51:49.036456 2319 log.go:172] (0xc000a88000) (5) Data frame handling\nI0402 21:51:49.038235 2319 log.go:172] (0xc0001149a0) Data frame received for 1\nI0402 21:51:49.038255 2319 log.go:172] (0xc0006a39a0) (1) Data frame handling\nI0402 21:51:49.038271 2319 log.go:172] (0xc0006a39a0) (1) Data frame sent\nI0402 21:51:49.038289 2319 log.go:172] (0xc0001149a0) (0xc0006a39a0) Stream removed, broadcasting: 1\nI0402 21:51:49.038312 2319 log.go:172] (0xc0001149a0) Go away received\nI0402 21:51:49.038665 2319 log.go:172] (0xc0001149a0) (0xc0006a39a0) Stream removed, broadcasting: 1\nI0402 21:51:49.038690 2319 log.go:172] (0xc0001149a0) (0xc000ace000) Stream removed, broadcasting: 3\nI0402 21:51:49.038703 2319 log.go:172] (0xc0001149a0) (0xc000a88000) Stream removed, broadcasting: 5\n" Apr 2 21:51:49.043: INFO: stdout: "" Apr 2 21:51:49.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-311 execpodd7878 -- /bin/sh -x -c nc -zv -t -w 2 10.100.108.31 80' Apr 2 21:51:49.241: INFO: stderr: "I0402 21:51:49.174481 2341 log.go:172] (0xc000982840) (0xc000ab4460) Create stream\nI0402 21:51:49.174546 2341 log.go:172] (0xc000982840) (0xc000ab4460) Stream added, broadcasting: 1\nI0402 21:51:49.179685 2341 log.go:172] (0xc000982840) Reply frame received for 1\nI0402 21:51:49.179724 2341 log.go:172] (0xc000982840) (0xc000606500) Create stream\nI0402 21:51:49.179737 2341 log.go:172] (0xc000982840) (0xc000606500) Stream added, broadcasting: 3\nI0402 21:51:49.180646 2341 log.go:172] (0xc000982840) Reply frame received for 3\nI0402 21:51:49.180697 2341 log.go:172] (0xc000982840) (0xc0007a52c0) Create stream\nI0402 21:51:49.180714 2341 log.go:172] (0xc000982840) (0xc0007a52c0) Stream added, broadcasting: 5\nI0402 21:51:49.181697 2341 log.go:172] (0xc000982840) Reply frame received for 5\nI0402 21:51:49.235584 2341 log.go:172] (0xc000982840) Data frame received for 3\nI0402 21:51:49.235624 2341 log.go:172] (0xc000606500) (3) Data frame handling\nI0402 21:51:49.235652 2341 log.go:172] (0xc000982840) Data frame received for 5\nI0402 21:51:49.235669 2341 log.go:172] (0xc0007a52c0) (5) Data frame handling\nI0402 21:51:49.235680 2341 log.go:172] (0xc0007a52c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.100.108.31 80\nConnection to 10.100.108.31 80 port [tcp/http] succeeded!\nI0402 21:51:49.235691 2341 log.go:172] (0xc000982840) Data frame received for 5\nI0402 21:51:49.235743 2341 log.go:172] (0xc0007a52c0) (5) Data frame handling\nI0402 21:51:49.236698 2341 log.go:172] (0xc000982840) Data frame received for 1\nI0402 21:51:49.236714 2341 log.go:172] (0xc000ab4460) (1) Data frame handling\nI0402 21:51:49.236728 2341 log.go:172] (0xc000ab4460) (1) Data frame sent\nI0402 21:51:49.236755 2341 log.go:172] (0xc000982840) (0xc000ab4460) Stream removed, broadcasting: 1\nI0402 21:51:49.236780 2341 log.go:172] (0xc000982840) Go away received\nI0402 21:51:49.237091 2341 log.go:172] (0xc000982840) (0xc000ab4460) Stream removed, broadcasting: 1\nI0402 21:51:49.237213 2341 log.go:172] (0xc000982840) (0xc000606500) Stream removed, broadcasting: 3\nI0402 21:51:49.237228 2341 log.go:172] (0xc000982840) (0xc0007a52c0) Stream removed, broadcasting: 5\n" Apr 2 21:51:49.241: INFO: stdout: "" Apr 2 21:51:49.241: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:49.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-311" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.770 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":188,"skipped":3189,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:49.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 2 21:51:53.426: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:51:53.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7439" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3195,"failed":0} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:51:53.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 in namespace container-probe-8269 Apr 2 21:51:57.553: INFO: Started pod liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 in namespace container-probe-8269 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 21:51:57.560: INFO: Initial restart count of pod liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 is 0 Apr 2 21:52:11.606: INFO: Restart count of pod container-probe-8269/liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 is now 1 (14.045726659s elapsed) Apr 2 21:52:31.648: INFO: Restart count of pod container-probe-8269/liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 is now 2 (34.087022255s elapsed) Apr 2 21:52:51.689: INFO: Restart count of pod container-probe-8269/liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 is now 3 (54.128098173s elapsed) Apr 2 21:53:11.729: INFO: Restart count of pod container-probe-8269/liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 is now 4 (1m14.168821897s elapsed) Apr 2 21:54:17.934: INFO: Restart count of pod container-probe-8269/liveness-a2105ae6-11d9-4bfe-afbc-6b83c2f5f8f5 is now 5 (2m20.37380506s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:54:17.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8269" for this suite. 
• [SLOW TEST:144.489 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3198,"failed":0} [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:54:17.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-21cf1644-4f72-4771-8d6a-213e8e8065f3 in namespace container-probe-7781 Apr 2 21:54:22.072: INFO: Started pod busybox-21cf1644-4f72-4771-8d6a-213e8e8065f3 in namespace container-probe-7781 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 21:54:22.075: INFO: Initial restart count of pod busybox-21cf1644-4f72-4771-8d6a-213e8e8065f3 is 0 Apr 2 21:55:16.216: INFO: Restart count of pod container-probe-7781/busybox-21cf1644-4f72-4771-8d6a-213e8e8065f3 is now 1 (54.141119435s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:55:16.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7781" for this suite. 
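That restart is the classic exec-probe pattern: the container creates /tmp/health, removes it after a few seconds, and the probe's cat /tmp/health begins failing. A hypothetical sketch along the lines of the upstream liveness example (the timing values are assumptions; with the probe defaults of periodSeconds=10 and failureThreshold=3, the first restart lands close to the ~54s the log reports):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Healthy for 10s, then the probed file disappears.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api releases
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					// PeriodSeconds and FailureThreshold are left at their
					// defaults (10s and 3): three misses, then a restart.
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}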
• [SLOW TEST:58.312 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:55:16.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:55:22.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8811" for this suite. STEP: Destroying namespace "nsdeletetest-3057" for this suite. Apr 2 21:55:22.598: INFO: Namespace nsdeletetest-3057 was already deleted STEP: Destroying namespace "nsdeletetest-3142" for this suite. 
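The flow of that namespace test maps almost one-to-one onto client-go calls. A minimal sketch, assuming client-go v0.17 to match this run (later releases add a context.Context and typed options to each call) and an illustrative namespace name; the point it demonstrates is that deleting a namespace cascades to the services inside it.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "nsdeletetest-demo" // illustrative name
	if _, err := cs.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}); err != nil {
		panic(err)
	}
	if _, err := cs.CoreV1().Services(ns).Create(&corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
	}); err != nil {
		panic(err)
	}
	// Namespace deletion cascades to every object inside it.
	if err := cs.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// (The test waits for the namespace to disappear and recreates it before
	// listing; that wait is elided here.)
	svcs, err := cs.CoreV1().Services(ns).List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("services remaining:", len(svcs.Items))
}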
• [SLOW TEST:6.329 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":192,"skipped":3270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:55:22.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1506 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 2 21:55:22.702: INFO: Found 0 stateful pods, waiting for 3 Apr 2 21:55:32.707: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 21:55:32.707: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 21:55:32.707: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 2 21:55:42.707: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 21:55:42.707: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 21:55:42.707: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 2 21:55:42.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1506 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 21:55:42.993: INFO: stderr: "I0402 21:55:42.856480 2361 log.go:172] (0xc00095b1e0) (0xc00092a6e0) Create stream\nI0402 21:55:42.856562 2361 log.go:172] (0xc00095b1e0) (0xc00092a6e0) Stream added, broadcasting: 1\nI0402 21:55:42.861785 2361 log.go:172] (0xc00095b1e0) Reply frame received for 1\nI0402 21:55:42.861834 2361 log.go:172] (0xc00095b1e0) (0xc00071c5a0) Create stream\nI0402 21:55:42.861848 2361 log.go:172] (0xc00095b1e0) (0xc00071c5a0) Stream added, broadcasting: 3\nI0402 21:55:42.862765 2361 log.go:172] (0xc00095b1e0) Reply frame received for 3\nI0402 21:55:42.862785 2361 log.go:172] (0xc00095b1e0) (0xc00055f360) Create stream\nI0402 21:55:42.862792 2361 log.go:172] (0xc00095b1e0) (0xc00055f360) Stream added, broadcasting: 5\nI0402 21:55:42.863801 2361 
log.go:172] (0xc00095b1e0) Reply frame received for 5\nI0402 21:55:42.941654 2361 log.go:172] (0xc00095b1e0) Data frame received for 5\nI0402 21:55:42.941697 2361 log.go:172] (0xc00055f360) (5) Data frame handling\nI0402 21:55:42.941729 2361 log.go:172] (0xc00055f360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 21:55:42.984453 2361 log.go:172] (0xc00095b1e0) Data frame received for 3\nI0402 21:55:42.984475 2361 log.go:172] (0xc00071c5a0) (3) Data frame handling\nI0402 21:55:42.984487 2361 log.go:172] (0xc00071c5a0) (3) Data frame sent\nI0402 21:55:42.984494 2361 log.go:172] (0xc00095b1e0) Data frame received for 3\nI0402 21:55:42.984500 2361 log.go:172] (0xc00071c5a0) (3) Data frame handling\nI0402 21:55:42.984832 2361 log.go:172] (0xc00095b1e0) Data frame received for 5\nI0402 21:55:42.984859 2361 log.go:172] (0xc00055f360) (5) Data frame handling\nI0402 21:55:42.986853 2361 log.go:172] (0xc00095b1e0) Data frame received for 1\nI0402 21:55:42.986864 2361 log.go:172] (0xc00092a6e0) (1) Data frame handling\nI0402 21:55:42.986871 2361 log.go:172] (0xc00092a6e0) (1) Data frame sent\nI0402 21:55:42.986939 2361 log.go:172] (0xc00095b1e0) (0xc00092a6e0) Stream removed, broadcasting: 1\nI0402 21:55:42.986960 2361 log.go:172] (0xc00095b1e0) Go away received\nI0402 21:55:42.987412 2361 log.go:172] (0xc00095b1e0) (0xc00092a6e0) Stream removed, broadcasting: 1\nI0402 21:55:42.987514 2361 log.go:172] (0xc00095b1e0) (0xc00071c5a0) Stream removed, broadcasting: 3\nI0402 21:55:42.987545 2361 log.go:172] (0xc00095b1e0) (0xc00055f360) Stream removed, broadcasting: 5\n" Apr 2 21:55:42.993: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 21:55:42.993: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 2 21:55:53.031: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 2 21:56:03.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1506 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 21:56:03.330: INFO: stderr: "I0402 21:56:03.237270 2381 log.go:172] (0xc000b00000) (0xc000aee000) Create stream\nI0402 21:56:03.237344 2381 log.go:172] (0xc000b00000) (0xc000aee000) Stream added, broadcasting: 1\nI0402 21:56:03.243004 2381 log.go:172] (0xc000b00000) Reply frame received for 1\nI0402 21:56:03.243059 2381 log.go:172] (0xc000b00000) (0xc000a12000) Create stream\nI0402 21:56:03.243078 2381 log.go:172] (0xc000b00000) (0xc000a12000) Stream added, broadcasting: 3\nI0402 21:56:03.244029 2381 log.go:172] (0xc000b00000) Reply frame received for 3\nI0402 21:56:03.244074 2381 log.go:172] (0xc000b00000) (0xc0006aba40) Create stream\nI0402 21:56:03.244087 2381 log.go:172] (0xc000b00000) (0xc0006aba40) Stream added, broadcasting: 5\nI0402 21:56:03.244757 2381 log.go:172] (0xc000b00000) Reply frame received for 5\nI0402 21:56:03.323289 2381 log.go:172] (0xc000b00000) Data frame received for 3\nI0402 21:56:03.323330 2381 log.go:172] (0xc000a12000) (3) Data frame handling\nI0402 21:56:03.323369 2381 log.go:172] (0xc000b00000) Data frame received for 5\nI0402 21:56:03.323411 2381 log.go:172] (0xc0006aba40) (5) Data frame handling\nI0402 21:56:03.323435 2381 log.go:172] (0xc0006aba40) (5) 
Data frame sent\nI0402 21:56:03.323447 2381 log.go:172] (0xc000b00000) Data frame received for 5\nI0402 21:56:03.323460 2381 log.go:172] (0xc0006aba40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 21:56:03.323494 2381 log.go:172] (0xc000a12000) (3) Data frame sent\nI0402 21:56:03.323525 2381 log.go:172] (0xc000b00000) Data frame received for 3\nI0402 21:56:03.323543 2381 log.go:172] (0xc000a12000) (3) Data frame handling\nI0402 21:56:03.324933 2381 log.go:172] (0xc000b00000) Data frame received for 1\nI0402 21:56:03.324950 2381 log.go:172] (0xc000aee000) (1) Data frame handling\nI0402 21:56:03.324960 2381 log.go:172] (0xc000aee000) (1) Data frame sent\nI0402 21:56:03.324971 2381 log.go:172] (0xc000b00000) (0xc000aee000) Stream removed, broadcasting: 1\nI0402 21:56:03.325011 2381 log.go:172] (0xc000b00000) Go away received\nI0402 21:56:03.325461 2381 log.go:172] (0xc000b00000) (0xc000aee000) Stream removed, broadcasting: 1\nI0402 21:56:03.325492 2381 log.go:172] (0xc000b00000) (0xc000a12000) Stream removed, broadcasting: 3\nI0402 21:56:03.325512 2381 log.go:172] (0xc000b00000) (0xc0006aba40) Stream removed, broadcasting: 5\n" Apr 2 21:56:03.330: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 21:56:03.330: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 21:56:13.356: INFO: Waiting for StatefulSet statefulset-1506/ss2 to complete update Apr 2 21:56:13.356: INFO: Waiting for Pod statefulset-1506/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 2 21:56:13.356: INFO: Waiting for Pod statefulset-1506/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 2 21:56:23.363: INFO: Waiting for StatefulSet statefulset-1506/ss2 to complete update Apr 2 21:56:23.363: INFO: Waiting for Pod statefulset-1506/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 2 21:56:33.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1506 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 21:56:33.613: INFO: stderr: "I0402 21:56:33.494008 2402 log.go:172] (0xc000104f20) (0xc0008fa000) Create stream\nI0402 21:56:33.494060 2402 log.go:172] (0xc000104f20) (0xc0008fa000) Stream added, broadcasting: 1\nI0402 21:56:33.496805 2402 log.go:172] (0xc000104f20) Reply frame received for 1\nI0402 21:56:33.496854 2402 log.go:172] (0xc000104f20) (0xc0006cfae0) Create stream\nI0402 21:56:33.496868 2402 log.go:172] (0xc000104f20) (0xc0006cfae0) Stream added, broadcasting: 3\nI0402 21:56:33.498349 2402 log.go:172] (0xc000104f20) Reply frame received for 3\nI0402 21:56:33.498375 2402 log.go:172] (0xc000104f20) (0xc0006cfcc0) Create stream\nI0402 21:56:33.498385 2402 log.go:172] (0xc000104f20) (0xc0006cfcc0) Stream added, broadcasting: 5\nI0402 21:56:33.499860 2402 log.go:172] (0xc000104f20) Reply frame received for 5\nI0402 21:56:33.573519 2402 log.go:172] (0xc000104f20) Data frame received for 5\nI0402 21:56:33.573537 2402 log.go:172] (0xc0006cfcc0) (5) Data frame handling\nI0402 21:56:33.573548 2402 log.go:172] (0xc0006cfcc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 21:56:33.604820 2402 log.go:172] (0xc000104f20) Data frame received for 3\nI0402 21:56:33.604841 2402 log.go:172] (0xc0006cfae0) (3) Data frame handling\nI0402 21:56:33.604852 
2402 log.go:172] (0xc0006cfae0) (3) Data frame sent\nI0402 21:56:33.605024 2402 log.go:172] (0xc000104f20) Data frame received for 5\nI0402 21:56:33.605055 2402 log.go:172] (0xc0006cfcc0) (5) Data frame handling\nI0402 21:56:33.605087 2402 log.go:172] (0xc000104f20) Data frame received for 3\nI0402 21:56:33.605274 2402 log.go:172] (0xc0006cfae0) (3) Data frame handling\nI0402 21:56:33.607247 2402 log.go:172] (0xc000104f20) Data frame received for 1\nI0402 21:56:33.607270 2402 log.go:172] (0xc0008fa000) (1) Data frame handling\nI0402 21:56:33.607284 2402 log.go:172] (0xc0008fa000) (1) Data frame sent\nI0402 21:56:33.607305 2402 log.go:172] (0xc000104f20) (0xc0008fa000) Stream removed, broadcasting: 1\nI0402 21:56:33.607344 2402 log.go:172] (0xc000104f20) Go away received\nI0402 21:56:33.607787 2402 log.go:172] (0xc000104f20) (0xc0008fa000) Stream removed, broadcasting: 1\nI0402 21:56:33.607812 2402 log.go:172] (0xc000104f20) (0xc0006cfae0) Stream removed, broadcasting: 3\nI0402 21:56:33.607826 2402 log.go:172] (0xc000104f20) (0xc0006cfcc0) Stream removed, broadcasting: 5\n" Apr 2 21:56:33.613: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 21:56:33.613: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 21:56:43.645: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 2 21:56:53.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1506 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 21:56:53.954: INFO: stderr: "I0402 21:56:53.859091 2425 log.go:172] (0xc0005d8dc0) (0xc000926000) Create stream\nI0402 21:56:53.859154 2425 log.go:172] (0xc0005d8dc0) (0xc000926000) Stream added, broadcasting: 1\nI0402 21:56:53.861579 2425 log.go:172] (0xc0005d8dc0) Reply frame received for 1\nI0402 21:56:53.861623 2425 log.go:172] (0xc0005d8dc0) (0xc000691ae0) Create stream\nI0402 21:56:53.861634 2425 log.go:172] (0xc0005d8dc0) (0xc000691ae0) Stream added, broadcasting: 3\nI0402 21:56:53.862485 2425 log.go:172] (0xc0005d8dc0) Reply frame received for 3\nI0402 21:56:53.862514 2425 log.go:172] (0xc0005d8dc0) (0xc0009260a0) Create stream\nI0402 21:56:53.862523 2425 log.go:172] (0xc0005d8dc0) (0xc0009260a0) Stream added, broadcasting: 5\nI0402 21:56:53.863340 2425 log.go:172] (0xc0005d8dc0) Reply frame received for 5\nI0402 21:56:53.946390 2425 log.go:172] (0xc0005d8dc0) Data frame received for 5\nI0402 21:56:53.946461 2425 log.go:172] (0xc0009260a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 21:56:53.946525 2425 log.go:172] (0xc0005d8dc0) Data frame received for 3\nI0402 21:56:53.946556 2425 log.go:172] (0xc000691ae0) (3) Data frame handling\nI0402 21:56:53.946573 2425 log.go:172] (0xc000691ae0) (3) Data frame sent\nI0402 21:56:53.946597 2425 log.go:172] (0xc0005d8dc0) Data frame received for 3\nI0402 21:56:53.946617 2425 log.go:172] (0xc000691ae0) (3) Data frame handling\nI0402 21:56:53.946644 2425 log.go:172] (0xc0009260a0) (5) Data frame sent\nI0402 21:56:53.946670 2425 log.go:172] (0xc0005d8dc0) Data frame received for 5\nI0402 21:56:53.946682 2425 log.go:172] (0xc0009260a0) (5) Data frame handling\nI0402 21:56:53.948179 2425 log.go:172] (0xc0005d8dc0) Data frame received for 1\nI0402 21:56:53.948202 2425 log.go:172] (0xc000926000) (1) Data frame handling\nI0402 21:56:53.948214 2425 log.go:172] (0xc000926000) (1) 
Data frame sent\nI0402 21:56:53.948233 2425 log.go:172] (0xc0005d8dc0) (0xc000926000) Stream removed, broadcasting: 1\nI0402 21:56:53.948255 2425 log.go:172] (0xc0005d8dc0) Go away received\nI0402 21:56:53.948681 2425 log.go:172] (0xc0005d8dc0) (0xc000926000) Stream removed, broadcasting: 1\nI0402 21:56:53.948704 2425 log.go:172] (0xc0005d8dc0) (0xc000691ae0) Stream removed, broadcasting: 3\nI0402 21:56:53.948716 2425 log.go:172] (0xc0005d8dc0) (0xc0009260a0) Stream removed, broadcasting: 5\n" Apr 2 21:56:53.954: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 21:56:53.954: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 21:57:14.027: INFO: Waiting for StatefulSet statefulset-1506/ss2 to complete update Apr 2 21:57:14.027: INFO: Waiting for Pod statefulset-1506/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 2 21:57:24.035: INFO: Deleting all statefulset in ns statefulset-1506 Apr 2 21:57:24.038: INFO: Scaling statefulset ss2 to 0 Apr 2 21:57:44.068: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 21:57:44.071: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:57:44.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1506" for this suite. • [SLOW TEST:141.488 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":193,"skipped":3296,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:57:44.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 2 21:57:44.132: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 21:57:44.174: INFO: Waiting for terminating namespaces to be deleted... 
Apr 2 21:57:44.177: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 2 21:57:44.193: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:57:44.193: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:57:44.193: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:57:44.193: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:57:44.193: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 2 21:57:44.211: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:57:44.211: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 21:57:44.211: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 2 21:57:44.211: INFO: Container kube-bench ready: false, restart count 0 Apr 2 21:57:44.211: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 2 21:57:44.211: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 21:57:44.212: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 2 21:57:44.212: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 2 21:57:44.314: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 2 21:57:44.314: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 2 21:57:44.314: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Apr 2 21:57:44.314: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 2 21:57:44.314: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 2 21:57:44.321: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-e997cef7-622d-4a23-a050-782af36ee7f2.16021f828b1dea88], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5583/filler-pod-e997cef7-622d-4a23-a050-782af36ee7f2 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e997cef7-622d-4a23-a050-782af36ee7f2.16021f82d83a281f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e997cef7-622d-4a23-a050-782af36ee7f2.16021f8316f15a9d], Reason = [Created], Message = [Created container filler-pod-e997cef7-622d-4a23-a050-782af36ee7f2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e997cef7-622d-4a23-a050-782af36ee7f2.16021f832d82695e], Reason = [Started], Message = [Started container filler-pod-e997cef7-622d-4a23-a050-782af36ee7f2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f85708fb-aab7-45b8-bcee-501fdf87b846.16021f828c8ffee5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5583/filler-pod-f85708fb-aab7-45b8-bcee-501fdf87b846 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f85708fb-aab7-45b8-bcee-501fdf87b846.16021f8304007004], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f85708fb-aab7-45b8-bcee-501fdf87b846.16021f833e69f22e], Reason = [Created], Message = [Created container filler-pod-f85708fb-aab7-45b8-bcee-501fdf87b846] STEP: Considering event: Type = [Normal], Name = [filler-pod-f85708fb-aab7-45b8-bcee-501fdf87b846.16021f834d5b4fb5], Reason = [Started], Message = [Started container filler-pod-f85708fb-aab7-45b8-bcee-501fdf87b846] STEP: Considering event: Type = [Warning], Name = [additional-pod.16021f837be8d595], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:57:49.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5583" for this suite. 
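The mechanics behind that FailedScheduling event are plain capacity arithmetic: the test reads each node's allocatable CPU, subtracts what existing pods request (kindnet's 100m is the only nonzero request above), and creates one pause pod per node requesting the remainder, 11130m here, so that any further pod with a nonzero request cannot fit. A hypothetical sketch of such a filler pod, with the request value taken from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // the image the events above report
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Sized to the node's remaining allocatable CPU
						// (value taken from the log above).
						corev1.ResourceCPU: resource.MustParse("11130m"),
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name, "requests cpu", pod.Spec.Containers[0].Resources.Requests.Cpu().String())
}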
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.455 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":194,"skipped":3301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:57:49.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:57:50.283: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:57:52.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461470, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461470, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461470, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461470, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:57:55.343: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:57:56.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7860" for this suite. STEP: Destroying namespace "webhook-7860-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.545 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":195,"skipped":3335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:57:56.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-fbe9057a-19fd-4fd5-b415-926676e9c76e STEP: Creating a pod to test consume configMaps Apr 2 21:57:56.207: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c" in namespace "projected-6379" to be "success or failure" Apr 2 21:57:56.216: INFO: Pod "pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.297271ms Apr 2 21:57:58.220: INFO: Pod "pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012111858s Apr 2 21:58:00.224: INFO: Pod "pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016201073s STEP: Saw pod success Apr 2 21:58:00.224: INFO: Pod "pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c" satisfied condition "success or failure" Apr 2 21:58:00.227: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c container projected-configmap-volume-test: STEP: delete the pod Apr 2 21:58:00.248: INFO: Waiting for pod pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c to disappear Apr 2 21:58:00.252: INFO: Pod pod-projected-configmaps-3f0b751b-bdd2-45eb-b6a7-3865f1e42e9c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:00.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6379" for this suite. 
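For reference, the shape of the pod behind that pass: a projected volume sourcing a ConfigMap, consumed by a container that runs under a non-root UID. The sketch below is hypothetical; the names, UID, mount path, and key are illustrative, not the fixture's values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/data-1"}, // illustrative key
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}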
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:00.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 2 21:58:00.314: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:07.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8913" for this suite. • [SLOW TEST:7.113 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":197,"skipped":3453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:07.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:58:08.275: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:58:10.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461488, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461488, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 21:58:13.351: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:25.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1439" for this suite. STEP: Destroying namespace "webhook-1439-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.234 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":198,"skipped":3494,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:25.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f7d8702d-26f6-4d7c-bfd5-914aacf62430 STEP: Creating a pod to test consume configMaps Apr 2 21:58:25.698: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e" in namespace "projected-3281" to be "success or failure" Apr 2 21:58:25.703: INFO: Pod "pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.279427ms Apr 2 21:58:27.707: INFO: Pod "pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009228096s Apr 2 21:58:29.711: INFO: Pod "pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013112438s STEP: Saw pod success Apr 2 21:58:29.711: INFO: Pod "pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e" satisfied condition "success or failure" Apr 2 21:58:29.714: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e container projected-configmap-volume-test: STEP: delete the pod Apr 2 21:58:29.732: INFO: Waiting for pod pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e to disappear Apr 2 21:58:29.736: INFO: Pod pod-projected-configmaps-7f642a12-ceb5-4cd8-9923-d491d779244e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:29.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3281" for this suite. 
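The knob that test turns is a single field. A hypothetical sketch of the volume (the ConfigMap name is illustrative): defaultMode on the projected volume makes the kubelet create every projected file with mode 0400, which is why the case is tagged [LinuxOnly]; file modes are a POSIX notion.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // every projected file is created read-only for the owner
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
	fmt.Printf("%s defaultMode=%o\n", vol.Name, *vol.VolumeSource.Projected.DefaultMode)
}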
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3515,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:29.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 2 21:58:29.832: INFO: Waiting up to 5m0s for pod "downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2" in namespace "downward-api-5901" to be "success or failure" Apr 2 21:58:29.839: INFO: Pod "downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.665758ms Apr 2 21:58:31.843: INFO: Pod "downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011111069s Apr 2 21:58:33.847: INFO: Pod "downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014550279s STEP: Saw pod success Apr 2 21:58:33.847: INFO: Pod "downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2" satisfied condition "success or failure" Apr 2 21:58:33.850: INFO: Trying to get logs from node jerma-worker pod downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2 container dapi-container: STEP: delete the pod Apr 2 21:58:33.885: INFO: Waiting for pod downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2 to disappear Apr 2 21:58:33.898: INFO: Pod downward-api-82f11cff-e72f-4e2d-bd7f-1be7033810c2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:33.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5901" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3524,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:33.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-84301d67-1f47-4ee2-8d45-1f69aa0d5b3e STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:38.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2165" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3535,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:38.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Apr 2 21:58:38.100: INFO: Waiting up to 5m0s for pod "client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca" in namespace "containers-1050" to be "success or failure" Apr 2 21:58:38.104: INFO: Pod "client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.521328ms Apr 2 21:58:40.108: INFO: Pod "client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007536302s Apr 2 21:58:42.112: INFO: Pod "client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012111153s STEP: Saw pod success Apr 2 21:58:42.112: INFO: Pod "client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca" satisfied condition "success or failure" Apr 2 21:58:42.115: INFO: Trying to get logs from node jerma-worker pod client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca container test-container: STEP: delete the pod Apr 2 21:58:42.134: INFO: Waiting for pod client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca to disappear Apr 2 21:58:42.136: INFO: Pod client-containers-f9aa83d6-4ada-4171-b468-3dd39efe67ca no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:42.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1050" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3535,"failed":0} ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:42.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:58:46.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1202" for this suite. 
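A minimal sketch of the scenario above, with illustrative names: a container whose command always fails. Once it terminates, status.containerStatuses[0].state.terminated carries the exit code and a non-empty Reason (typically "Error"), and that populated terminated reason is what the test asserts on.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits 1 every time
			}},
		},
	}
	fmt.Println(pod.Name)
}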
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3535,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:58:46.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-ec9358c6-b0a6-4f1c-93df-80ca400c7f00 STEP: Creating configMap with name cm-test-opt-upd-415f7dcb-eaa4-4497-b883-5163025500a8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ec9358c6-b0a6-4f1c-93df-80ca400c7f00 STEP: Updating configmap cm-test-opt-upd-415f7dcb-eaa4-4497-b883-5163025500a8 STEP: Creating configMap with name cm-test-opt-create-8c69eaa0-e106-4f2d-9b91-3407a9eec6bf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 21:59:56.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3150" for this suite. 
• [SLOW TEST:70.689 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3553,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 21:59:56.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 21:59:57.387: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 21:59:59.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 22:00:01.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721461597, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 22:00:04.539: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:00:04.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4191" for this suite. STEP: Destroying namespace "webhook-4191-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.826 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":205,"skipped":3557,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:00:04.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-nbnj STEP: Creating a pod to test atomic-volume-subpath Apr 2 22:00:05.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nbnj" in namespace "subpath-191" to be "success or failure" Apr 2 22:00:05.093: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Pending", Reason="", readiness=false. Elapsed: 48.911698ms Apr 2 22:00:07.097: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052708462s Apr 2 22:00:09.101: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.05699162s Apr 2 22:00:11.111: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 6.066181685s Apr 2 22:00:13.114: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 8.069812375s Apr 2 22:00:15.118: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 10.073315665s Apr 2 22:00:17.122: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 12.077338735s Apr 2 22:00:19.126: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 14.08143298s Apr 2 22:00:21.131: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 16.086101061s Apr 2 22:00:23.135: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 18.090567619s Apr 2 22:00:25.139: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 20.09476249s Apr 2 22:00:27.143: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Running", Reason="", readiness=true. Elapsed: 22.098813999s Apr 2 22:00:29.148: INFO: Pod "pod-subpath-test-downwardapi-nbnj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.103087253s STEP: Saw pod success Apr 2 22:00:29.148: INFO: Pod "pod-subpath-test-downwardapi-nbnj" satisfied condition "success or failure" Apr 2 22:00:29.151: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-nbnj container test-container-subpath-downwardapi-nbnj: STEP: delete the pod Apr 2 22:00:29.186: INFO: Waiting for pod pod-subpath-test-downwardapi-nbnj to disappear Apr 2 22:00:29.208: INFO: Pod pod-subpath-test-downwardapi-nbnj no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-nbnj Apr 2 22:00:29.208: INFO: Deleting pod "pod-subpath-test-downwardapi-nbnj" in namespace "subpath-191" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:00:29.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-191" for this suite. • [SLOW TEST:24.444 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":206,"skipped":3562,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:00:29.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:00:42.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6110" for this suite. • [SLOW TEST:13.164 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":207,"skipped":3582,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:00:42.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-426c STEP: Creating a pod to test atomic-volume-subpath Apr 2 22:00:42.484: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-426c" in namespace "subpath-4343" to be "success or failure" Apr 2 22:00:42.505: INFO: Pod "pod-subpath-test-secret-426c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.521887ms Apr 2 22:00:44.509: INFO: Pod "pod-subpath-test-secret-426c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024734447s Apr 2 22:00:46.513: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 4.028916891s Apr 2 22:00:48.517: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 6.032940788s Apr 2 22:00:50.522: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 8.03719766s Apr 2 22:00:52.526: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 10.041136433s Apr 2 22:00:54.529: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 12.044517425s Apr 2 22:00:56.532: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 14.047887774s Apr 2 22:00:58.548: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 16.063766601s Apr 2 22:01:00.563: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 18.07817619s Apr 2 22:01:02.567: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 20.082756816s Apr 2 22:01:04.572: INFO: Pod "pod-subpath-test-secret-426c": Phase="Running", Reason="", readiness=true. Elapsed: 22.087411341s Apr 2 22:01:06.576: INFO: Pod "pod-subpath-test-secret-426c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.091668011s STEP: Saw pod success Apr 2 22:01:06.576: INFO: Pod "pod-subpath-test-secret-426c" satisfied condition "success or failure" Apr 2 22:01:06.579: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-426c container test-container-subpath-secret-426c: STEP: delete the pod Apr 2 22:01:06.598: INFO: Waiting for pod pod-subpath-test-secret-426c to disappear Apr 2 22:01:06.608: INFO: Pod pod-subpath-test-secret-426c no longer exists STEP: Deleting pod pod-subpath-test-secret-426c Apr 2 22:01:06.608: INFO: Deleting pod "pod-subpath-test-secret-426c" in namespace "subpath-4343" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:01:06.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4343" for this suite. 
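(Note: this and the earlier downwardAPI case are the two atomic-writer subpath variants in this stretch; both mount a single key of an atomically-written volume via subPath and require the pod to keep reading it for the whole ~20s Running phase before exiting 0. A minimal sketch of the secret variant, with illustrative names:

$ kubectl create secret generic subpath-demo-secret --from-literal=secret-key=secret-value
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /probe/key && sleep 5"]
    volumeMounts:
    - name: data
      mountPath: /probe/key
      subPath: secret-key
  volumes:
  - name: data
    secret:
      secretName: subpath-demo-secret
EOF
)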
• [SLOW TEST:24.234 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":208,"skipped":3590,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:01:06.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Apr 2 22:01:06.690: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix749924042/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:01:06.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6558" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":209,"skipped":3605,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:01:06.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c4f96d07-6285-4e2f-b406-2cc60f8d805b STEP: Creating a pod to test consume configMaps Apr 2 22:01:06.867: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409" in namespace "configmap-9163" to be "success or failure" Apr 2 22:01:06.878: INFO: Pod "pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.364032ms Apr 2 22:01:08.881: INFO: Pod "pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014099804s Apr 2 22:01:10.886: INFO: Pod "pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018279331s STEP: Saw pod success Apr 2 22:01:10.886: INFO: Pod "pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409" satisfied condition "success or failure" Apr 2 22:01:10.889: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409 container configmap-volume-test: STEP: delete the pod Apr 2 22:01:10.905: INFO: Waiting for pod pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409 to disappear Apr 2 22:01:10.916: INFO: Pod pod-configmaps-eb9be945-02c2-439d-a5c1-d429814c0409 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:01:10.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9163" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3617,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:01:10.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:01:10.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-84" for this suite. 
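(Note: the discovery checks above go against the API server's aggregated discovery documents and can be reproduced with raw GETs; no CRD needs to be created first, since apiextensions.k8s.io is built in:

$ kubectl get --raw /apis | grep -o apiextensions.k8s.io
$ kubectl get --raw /apis/apiextensions.k8s.io
$ kubectl get --raw /apis/apiextensions.k8s.io/v1

The last response must list "customresourcedefinitions" among its resources, which is exactly what the final step verifies.)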
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":211,"skipped":3619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:01:10.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:01:11.056: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 2 22:01:13.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 create -f -' Apr 2 22:01:17.188: INFO: stderr: "" Apr 2 22:01:17.188: INFO: stdout: "e2e-test-crd-publish-openapi-8238-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 2 22:01:17.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 delete e2e-test-crd-publish-openapi-8238-crds test-foo' Apr 2 22:01:17.285: INFO: stderr: "" Apr 2 22:01:17.285: INFO: stdout: "e2e-test-crd-publish-openapi-8238-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 2 22:01:17.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 apply -f -' Apr 2 22:01:17.509: INFO: stderr: "" Apr 2 22:01:17.509: INFO: stdout: "e2e-test-crd-publish-openapi-8238-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 2 22:01:17.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 delete e2e-test-crd-publish-openapi-8238-crds test-foo' Apr 2 22:01:17.637: INFO: stderr: "" Apr 2 22:01:17.637: INFO: stdout: "e2e-test-crd-publish-openapi-8238-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 2 22:01:17.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 create -f -' Apr 2 22:01:17.860: INFO: rc: 1 Apr 2 22:01:17.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 apply -f -' Apr 2 22:01:18.084: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 2 22:01:18.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 create -f -' Apr 2 22:01:18.312: INFO: rc: 1 Apr 2 22:01:18.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7683 apply -f -' Apr 2 22:01:18.549: INFO: rc: 1 STEP: kubectl explain works to 
explain CR properties Apr 2 22:01:18.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8238-crds' Apr 2 22:01:18.791: INFO: stderr: "" Apr 2 22:01:18.791: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8238-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 2 22:01:18.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8238-crds.metadata' Apr 2 22:01:19.031: INFO: stderr: "" Apr 2 22:01:19.031: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8238-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. 
The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 2 22:01:19.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8238-crds.spec' Apr 2 22:01:19.261: INFO: stderr: "" Apr 2 22:01:19.261: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8238-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 2 22:01:19.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8238-crds.spec.bars' Apr 2 22:01:19.498: INFO: stderr: "" Apr 2 22:01:19.498: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8238-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 2 22:01:19.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8238-crds.spec.bars2' Apr 2 22:01:19.720: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:01:21.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7683" for this suite. • [SLOW TEST:10.616 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":212,"skipped":3644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:01:21.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-a87b0ca0-7c1d-436d-b0a4-e8d66e4fd268 in namespace container-probe-2220 Apr 2 22:01:25.718: INFO: Started pod busybox-a87b0ca0-7c1d-436d-b0a4-e8d66e4fd268 in namespace container-probe-2220 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 22:01:25.721: INFO: 
Initial restart count of pod busybox-a87b0ca0-7c1d-436d-b0a4-e8d66e4fd268 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:05:26.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2220" for this suite. • [SLOW TEST:244.700 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3668,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:05:26.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-b9cff51e-8e66-4ee4-a514-74072af7162c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b9cff51e-8e66-4ee4-a514-74072af7162c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:07:00.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7256" for this suite. 
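(Note: the ~94s elapsed here is almost entirely the wait for the kubelet to notice the configMap change; updates to configMap volumes propagate on the kubelet's periodic sync rather than immediately. The container-probe test just before it is the no-restart counterpart of the restart tests: /tmp/health exists for the pod's entire life, so the exec probe keeps succeeding and restartCount must still be 0 after the full ~4-minute observation window. Propagation can be watched by hand, with an illustrative pod name and mount path:

$ kubectl exec configmap-client -- sh -c 'while true; do cat /etc/configmap-volume/data-1; sleep 2; done'

The output flips from the old value to the new one once the kubelet re-projects the volume.)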
• [SLOW TEST:94.627 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3678,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:07:00.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-bbffee69-3765-483e-bf43-db734996d85c STEP: Creating a pod to test consume configMaps Apr 2 22:07:01.010: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8" in namespace "projected-631" to be "success or failure" Apr 2 22:07:01.014: INFO: Pod "pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.839239ms Apr 2 22:07:03.018: INFO: Pod "pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007895084s Apr 2 22:07:05.023: INFO: Pod "pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012402908s STEP: Saw pod success Apr 2 22:07:05.023: INFO: Pod "pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8" satisfied condition "success or failure" Apr 2 22:07:05.026: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8 container projected-configmap-volume-test: STEP: delete the pod Apr 2 22:07:05.051: INFO: Waiting for pod pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8 to disappear Apr 2 22:07:05.118: INFO: Pod pod-projected-configmaps-f782ed36-f7dd-4678-b692-d21b5135dad8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:07:05.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-631" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3679,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:07:05.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:07:16.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9402" for this suite. • [SLOW TEST:11.115 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":216,"skipped":3680,"failed":0} SSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:07:16.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:07:16.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1399" for this suite. 
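(Note: the Lease test above only requires the coordination.k8s.io API to work end-to-end (create/read/update/patch/delete of Lease objects); node heartbeats use the same API, so a quick sanity check on any cluster with default node-lease settings is:

$ kubectl get leases -n kube-node-lease
$ kubectl get --raw /apis/coordination.k8s.io/v1

The ResourceQuota test before it follows the standard quota lifecycle, create a quota, create a ReplicationController, check status.used, delete the controller, check that the usage is released, which can be approximated with:

$ kubectl create quota rc-quota --hard=replicationcontrollers=1
$ kubectl describe quota rc-quota)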
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":217,"skipped":3683,"failed":0} ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:07:16.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-7449 STEP: creating replication controller nodeport-test in namespace services-7449 I0402 22:07:16.542482 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7449, replica count: 2 I0402 22:07:19.593006 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 22:07:22.593325 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 22:07:22.593: INFO: Creating new exec pod Apr 2 22:07:27.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7449 execpodjv82x -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 2 22:07:27.851: INFO: stderr: "I0402 22:07:27.760261 2754 log.go:172] (0xc00061e6e0) (0xc0005c8000) Create stream\nI0402 22:07:27.760356 2754 log.go:172] (0xc00061e6e0) (0xc0005c8000) Stream added, broadcasting: 1\nI0402 22:07:27.763660 2754 log.go:172] (0xc00061e6e0) Reply frame received for 1\nI0402 22:07:27.763691 2754 log.go:172] (0xc00061e6e0) (0xc0005c8140) Create stream\nI0402 22:07:27.763699 2754 log.go:172] (0xc00061e6e0) (0xc0005c8140) Stream added, broadcasting: 3\nI0402 22:07:27.765076 2754 log.go:172] (0xc00061e6e0) Reply frame received for 3\nI0402 22:07:27.765247 2754 log.go:172] (0xc00061e6e0) (0xc0005c81e0) Create stream\nI0402 22:07:27.765265 2754 log.go:172] (0xc00061e6e0) (0xc0005c81e0) Stream added, broadcasting: 5\nI0402 22:07:27.766341 2754 log.go:172] (0xc00061e6e0) Reply frame received for 5\nI0402 22:07:27.843580 2754 log.go:172] (0xc00061e6e0) Data frame received for 5\nI0402 22:07:27.843617 2754 log.go:172] (0xc0005c81e0) (5) Data frame handling\nI0402 22:07:27.843634 2754 log.go:172] (0xc0005c81e0) (5) Data frame sent\nI0402 22:07:27.843645 2754 log.go:172] (0xc00061e6e0) Data frame received for 5\nI0402 22:07:27.843654 2754 log.go:172] (0xc0005c81e0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0402 22:07:27.843683 2754 log.go:172] (0xc0005c81e0) (5) Data frame sent\nI0402 22:07:27.843724 2754 log.go:172] (0xc00061e6e0) Data frame received for 5\nI0402 22:07:27.843858 2754 log.go:172] (0xc0005c81e0) (5) Data frame handling\nI0402 22:07:27.844174 2754 log.go:172] (0xc00061e6e0) Data frame received for 3\nI0402 
22:07:27.844205 2754 log.go:172] (0xc0005c8140) (3) Data frame handling\nI0402 22:07:27.846036 2754 log.go:172] (0xc00061e6e0) Data frame received for 1\nI0402 22:07:27.846050 2754 log.go:172] (0xc0005c8000) (1) Data frame handling\nI0402 22:07:27.846057 2754 log.go:172] (0xc0005c8000) (1) Data frame sent\nI0402 22:07:27.846130 2754 log.go:172] (0xc00061e6e0) (0xc0005c8000) Stream removed, broadcasting: 1\nI0402 22:07:27.846348 2754 log.go:172] (0xc00061e6e0) Go away received\nI0402 22:07:27.846579 2754 log.go:172] (0xc00061e6e0) (0xc0005c8000) Stream removed, broadcasting: 1\nI0402 22:07:27.846599 2754 log.go:172] (0xc00061e6e0) (0xc0005c8140) Stream removed, broadcasting: 3\nI0402 22:07:27.846613 2754 log.go:172] (0xc00061e6e0) (0xc0005c81e0) Stream removed, broadcasting: 5\n" Apr 2 22:07:27.851: INFO: stdout: "" Apr 2 22:07:27.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7449 execpodjv82x -- /bin/sh -x -c nc -zv -t -w 2 10.108.108.187 80' Apr 2 22:07:28.053: INFO: stderr: "I0402 22:07:27.977684 2774 log.go:172] (0xc00099e160) (0xc0005b7d60) Create stream\nI0402 22:07:27.977749 2774 log.go:172] (0xc00099e160) (0xc0005b7d60) Stream added, broadcasting: 1\nI0402 22:07:27.980945 2774 log.go:172] (0xc00099e160) Reply frame received for 1\nI0402 22:07:27.980987 2774 log.go:172] (0xc00099e160) (0xc0005b7e00) Create stream\nI0402 22:07:27.981001 2774 log.go:172] (0xc00099e160) (0xc0005b7e00) Stream added, broadcasting: 3\nI0402 22:07:27.982183 2774 log.go:172] (0xc00099e160) Reply frame received for 3\nI0402 22:07:27.982232 2774 log.go:172] (0xc00099e160) (0xc0001fa820) Create stream\nI0402 22:07:27.982247 2774 log.go:172] (0xc00099e160) (0xc0001fa820) Stream added, broadcasting: 5\nI0402 22:07:27.983130 2774 log.go:172] (0xc00099e160) Reply frame received for 5\nI0402 22:07:28.046092 2774 log.go:172] (0xc00099e160) Data frame received for 5\nI0402 22:07:28.046137 2774 log.go:172] (0xc0001fa820) (5) Data frame handling\nI0402 22:07:28.046181 2774 log.go:172] (0xc0001fa820) (5) Data frame sent\nI0402 22:07:28.046207 2774 log.go:172] (0xc00099e160) Data frame received for 5\nI0402 22:07:28.046225 2774 log.go:172] (0xc0001fa820) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.108.187 80\nConnection to 10.108.108.187 80 port [tcp/http] succeeded!\nI0402 22:07:28.046255 2774 log.go:172] (0xc00099e160) Data frame received for 3\nI0402 22:07:28.046274 2774 log.go:172] (0xc0005b7e00) (3) Data frame handling\nI0402 22:07:28.047500 2774 log.go:172] (0xc00099e160) Data frame received for 1\nI0402 22:07:28.047525 2774 log.go:172] (0xc0005b7d60) (1) Data frame handling\nI0402 22:07:28.047549 2774 log.go:172] (0xc0005b7d60) (1) Data frame sent\nI0402 22:07:28.047566 2774 log.go:172] (0xc00099e160) (0xc0005b7d60) Stream removed, broadcasting: 1\nI0402 22:07:28.047587 2774 log.go:172] (0xc00099e160) Go away received\nI0402 22:07:28.048047 2774 log.go:172] (0xc00099e160) (0xc0005b7d60) Stream removed, broadcasting: 1\nI0402 22:07:28.048073 2774 log.go:172] (0xc00099e160) (0xc0005b7e00) Stream removed, broadcasting: 3\nI0402 22:07:28.048086 2774 log.go:172] (0xc00099e160) (0xc0001fa820) Stream removed, broadcasting: 5\n" Apr 2 22:07:28.053: INFO: stdout: "" Apr 2 22:07:28.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7449 execpodjv82x -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32766' Apr 2 22:07:28.271: INFO: stderr: "I0402 22:07:28.187233 2795 log.go:172] (0xc0000f49a0) (0xc000717c20) Create 
stream\nI0402 22:07:28.187286 2795 log.go:172] (0xc0000f49a0) (0xc000717c20) Stream added, broadcasting: 1\nI0402 22:07:28.196812 2795 log.go:172] (0xc0000f49a0) Reply frame received for 1\nI0402 22:07:28.196873 2795 log.go:172] (0xc0000f49a0) (0xc000938000) Create stream\nI0402 22:07:28.196888 2795 log.go:172] (0xc0000f49a0) (0xc000938000) Stream added, broadcasting: 3\nI0402 22:07:28.198977 2795 log.go:172] (0xc0000f49a0) Reply frame received for 3\nI0402 22:07:28.199025 2795 log.go:172] (0xc0000f49a0) (0xc0009380a0) Create stream\nI0402 22:07:28.199039 2795 log.go:172] (0xc0000f49a0) (0xc0009380a0) Stream added, broadcasting: 5\nI0402 22:07:28.200331 2795 log.go:172] (0xc0000f49a0) Reply frame received for 5\nI0402 22:07:28.266165 2795 log.go:172] (0xc0000f49a0) Data frame received for 3\nI0402 22:07:28.266202 2795 log.go:172] (0xc000938000) (3) Data frame handling\nI0402 22:07:28.266228 2795 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0402 22:07:28.266246 2795 log.go:172] (0xc0009380a0) (5) Data frame handling\nI0402 22:07:28.266257 2795 log.go:172] (0xc0009380a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 32766\nConnection to 172.17.0.10 32766 port [tcp/32766] succeeded!\nI0402 22:07:28.266267 2795 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0402 22:07:28.266308 2795 log.go:172] (0xc0009380a0) (5) Data frame handling\nI0402 22:07:28.267289 2795 log.go:172] (0xc0000f49a0) Data frame received for 1\nI0402 22:07:28.267306 2795 log.go:172] (0xc000717c20) (1) Data frame handling\nI0402 22:07:28.267314 2795 log.go:172] (0xc000717c20) (1) Data frame sent\nI0402 22:07:28.267325 2795 log.go:172] (0xc0000f49a0) (0xc000717c20) Stream removed, broadcasting: 1\nI0402 22:07:28.267344 2795 log.go:172] (0xc0000f49a0) Go away received\nI0402 22:07:28.267670 2795 log.go:172] (0xc0000f49a0) (0xc000717c20) Stream removed, broadcasting: 1\nI0402 22:07:28.267682 2795 log.go:172] (0xc0000f49a0) (0xc000938000) Stream removed, broadcasting: 3\nI0402 22:07:28.267688 2795 log.go:172] (0xc0000f49a0) (0xc0009380a0) Stream removed, broadcasting: 5\n" Apr 2 22:07:28.272: INFO: stdout: "" Apr 2 22:07:28.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7449 execpodjv82x -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32766' Apr 2 22:07:28.477: INFO: stderr: "I0402 22:07:28.401101 2817 log.go:172] (0xc00091f080) (0xc00095e5a0) Create stream\nI0402 22:07:28.401287 2817 log.go:172] (0xc00091f080) (0xc00095e5a0) Stream added, broadcasting: 1\nI0402 22:07:28.406343 2817 log.go:172] (0xc00091f080) Reply frame received for 1\nI0402 22:07:28.406397 2817 log.go:172] (0xc00091f080) (0xc0005926e0) Create stream\nI0402 22:07:28.406414 2817 log.go:172] (0xc00091f080) (0xc0005926e0) Stream added, broadcasting: 3\nI0402 22:07:28.407505 2817 log.go:172] (0xc00091f080) Reply frame received for 3\nI0402 22:07:28.407564 2817 log.go:172] (0xc00091f080) (0xc0007574a0) Create stream\nI0402 22:07:28.407592 2817 log.go:172] (0xc00091f080) (0xc0007574a0) Stream added, broadcasting: 5\nI0402 22:07:28.408528 2817 log.go:172] (0xc00091f080) Reply frame received for 5\nI0402 22:07:28.470039 2817 log.go:172] (0xc00091f080) Data frame received for 3\nI0402 22:07:28.470059 2817 log.go:172] (0xc0005926e0) (3) Data frame handling\nI0402 22:07:28.470123 2817 log.go:172] (0xc00091f080) Data frame received for 5\nI0402 22:07:28.470160 2817 log.go:172] (0xc0007574a0) (5) Data frame handling\nI0402 22:07:28.470195 2817 log.go:172] (0xc0007574a0) (5) Data frame sent\nI0402 
22:07:28.470212 2817 log.go:172] (0xc00091f080) Data frame received for 5\nI0402 22:07:28.470221 2817 log.go:172] (0xc0007574a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32766\nConnection to 172.17.0.8 32766 port [tcp/32766] succeeded!\nI0402 22:07:28.471589 2817 log.go:172] (0xc00091f080) Data frame received for 1\nI0402 22:07:28.471619 2817 log.go:172] (0xc00095e5a0) (1) Data frame handling\nI0402 22:07:28.471654 2817 log.go:172] (0xc00095e5a0) (1) Data frame sent\nI0402 22:07:28.471677 2817 log.go:172] (0xc00091f080) (0xc00095e5a0) Stream removed, broadcasting: 1\nI0402 22:07:28.471696 2817 log.go:172] (0xc00091f080) Go away received\nI0402 22:07:28.472179 2817 log.go:172] (0xc00091f080) (0xc00095e5a0) Stream removed, broadcasting: 1\nI0402 22:07:28.472216 2817 log.go:172] (0xc00091f080) (0xc0005926e0) Stream removed, broadcasting: 3\nI0402 22:07:28.472235 2817 log.go:172] (0xc00091f080) (0xc0007574a0) Stream removed, broadcasting: 5\n" Apr 2 22:07:28.477: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:07:28.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7449" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.108 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":218,"skipped":3683,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:07:28.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8699 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-8699 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8699 Apr 2 22:07:28.607: INFO: Found 0 stateful pods, waiting for 1 Apr 2 22:07:38.622: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 2 
22:07:38.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8699 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 22:07:38.891: INFO: stderr: "I0402 22:07:38.767273 2839 log.go:172] (0xc0000f4bb0) (0xc0006c1a40) Create stream\nI0402 22:07:38.767328 2839 log.go:172] (0xc0000f4bb0) (0xc0006c1a40) Stream added, broadcasting: 1\nI0402 22:07:38.770138 2839 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0402 22:07:38.770178 2839 log.go:172] (0xc0000f4bb0) (0xc0005d8000) Create stream\nI0402 22:07:38.770195 2839 log.go:172] (0xc0000f4bb0) (0xc0005d8000) Stream added, broadcasting: 3\nI0402 22:07:38.771356 2839 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0402 22:07:38.771420 2839 log.go:172] (0xc0000f4bb0) (0xc00002c000) Create stream\nI0402 22:07:38.771447 2839 log.go:172] (0xc0000f4bb0) (0xc00002c000) Stream added, broadcasting: 5\nI0402 22:07:38.772518 2839 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0402 22:07:38.859549 2839 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0402 22:07:38.859577 2839 log.go:172] (0xc00002c000) (5) Data frame handling\nI0402 22:07:38.859601 2839 log.go:172] (0xc00002c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 22:07:38.885240 2839 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0402 22:07:38.885265 2839 log.go:172] (0xc0005d8000) (3) Data frame handling\nI0402 22:07:38.885279 2839 log.go:172] (0xc0005d8000) (3) Data frame sent\nI0402 22:07:38.885350 2839 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0402 22:07:38.885379 2839 log.go:172] (0xc0005d8000) (3) Data frame handling\nI0402 22:07:38.885490 2839 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0402 22:07:38.885529 2839 log.go:172] (0xc00002c000) (5) Data frame handling\nI0402 22:07:38.887404 2839 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0402 22:07:38.887422 2839 log.go:172] (0xc0006c1a40) (1) Data frame handling\nI0402 22:07:38.887436 2839 log.go:172] (0xc0006c1a40) (1) Data frame sent\nI0402 22:07:38.887447 2839 log.go:172] (0xc0000f4bb0) (0xc0006c1a40) Stream removed, broadcasting: 1\nI0402 22:07:38.887456 2839 log.go:172] (0xc0000f4bb0) Go away received\nI0402 22:07:38.887799 2839 log.go:172] (0xc0000f4bb0) (0xc0006c1a40) Stream removed, broadcasting: 1\nI0402 22:07:38.887812 2839 log.go:172] (0xc0000f4bb0) (0xc0005d8000) Stream removed, broadcasting: 3\nI0402 22:07:38.887818 2839 log.go:172] (0xc0000f4bb0) (0xc00002c000) Stream removed, broadcasting: 5\n" Apr 2 22:07:38.891: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 22:07:38.891: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 22:07:38.895: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 2 22:07:48.900: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 22:07:48.900: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 22:07:48.932: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 22:07:48.932: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:39 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:28 +0000 UTC }] Apr 2 22:07:48.932: INFO: Apr 2 22:07:48.932: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 2 22:07:49.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983038019s Apr 2 22:07:50.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978110177s Apr 2 22:07:51.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973618724s Apr 2 22:07:52.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.968890973s Apr 2 22:07:53.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.964324168s Apr 2 22:07:54.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.95967251s Apr 2 22:07:55.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.951047109s Apr 2 22:07:56.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946764823s Apr 2 22:07:57.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 942.292186ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8699 Apr 2 22:07:58.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8699 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 22:07:59.230: INFO: stderr: "I0402 22:07:59.122177 2859 log.go:172] (0xc0009f4840) (0xc0009cc000) Create stream\nI0402 22:07:59.122260 2859 log.go:172] (0xc0009f4840) (0xc0009cc000) Stream added, broadcasting: 1\nI0402 22:07:59.125378 2859 log.go:172] (0xc0009f4840) Reply frame received for 1\nI0402 22:07:59.125413 2859 log.go:172] (0xc0009f4840) (0xc0006c5a40) Create stream\nI0402 22:07:59.125425 2859 log.go:172] (0xc0009f4840) (0xc0006c5a40) Stream added, broadcasting: 3\nI0402 22:07:59.126502 2859 log.go:172] (0xc0009f4840) Reply frame received for 3\nI0402 22:07:59.126551 2859 log.go:172] (0xc0009f4840) (0xc000228000) Create stream\nI0402 22:07:59.126567 2859 log.go:172] (0xc0009f4840) (0xc000228000) Stream added, broadcasting: 5\nI0402 22:07:59.127445 2859 log.go:172] (0xc0009f4840) Reply frame received for 5\nI0402 22:07:59.224647 2859 log.go:172] (0xc0009f4840) Data frame received for 5\nI0402 22:07:59.224672 2859 log.go:172] (0xc000228000) (5) Data frame handling\nI0402 22:07:59.224683 2859 log.go:172] (0xc000228000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 22:07:59.224694 2859 log.go:172] (0xc0009f4840) Data frame received for 3\nI0402 22:07:59.224700 2859 log.go:172] (0xc0006c5a40) (3) Data frame handling\nI0402 22:07:59.224711 2859 log.go:172] (0xc0006c5a40) (3) Data frame sent\nI0402 22:07:59.224718 2859 log.go:172] (0xc0009f4840) Data frame received for 3\nI0402 22:07:59.224725 2859 log.go:172] (0xc0006c5a40) (3) Data frame handling\nI0402 22:07:59.224859 2859 log.go:172] (0xc0009f4840) Data frame received for 5\nI0402 22:07:59.224875 2859 log.go:172] (0xc000228000) (5) Data frame handling\nI0402 22:07:59.225984 2859 log.go:172] (0xc0009f4840) Data frame received for 1\nI0402 22:07:59.226009 2859 log.go:172] (0xc0009cc000) (1) Data frame handling\nI0402 22:07:59.226022 2859 log.go:172] (0xc0009cc000) (1) Data frame sent\nI0402 22:07:59.226030 2859 log.go:172] (0xc0009f4840) (0xc0009cc000) Stream removed, broadcasting: 1\nI0402 22:07:59.226066 2859 log.go:172] (0xc0009f4840) Go away received\nI0402 
22:07:59.226319 2859 log.go:172] (0xc0009f4840) (0xc0009cc000) Stream removed, broadcasting: 1\nI0402 22:07:59.226337 2859 log.go:172] (0xc0009f4840) (0xc0006c5a40) Stream removed, broadcasting: 3\nI0402 22:07:59.226349 2859 log.go:172] (0xc0009f4840) (0xc000228000) Stream removed, broadcasting: 5\n" Apr 2 22:07:59.230: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 22:07:59.230: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 22:07:59.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8699 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 22:07:59.427: INFO: stderr: "I0402 22:07:59.363728 2880 log.go:172] (0xc000a9eb00) (0xc000932000) Create stream\nI0402 22:07:59.363780 2880 log.go:172] (0xc000a9eb00) (0xc000932000) Stream added, broadcasting: 1\nI0402 22:07:59.366447 2880 log.go:172] (0xc000a9eb00) Reply frame received for 1\nI0402 22:07:59.366485 2880 log.go:172] (0xc000a9eb00) (0xc000aa2000) Create stream\nI0402 22:07:59.366493 2880 log.go:172] (0xc000a9eb00) (0xc000aa2000) Stream added, broadcasting: 3\nI0402 22:07:59.367242 2880 log.go:172] (0xc000a9eb00) Reply frame received for 3\nI0402 22:07:59.367278 2880 log.go:172] (0xc000a9eb00) (0xc0006e9ae0) Create stream\nI0402 22:07:59.367290 2880 log.go:172] (0xc000a9eb00) (0xc0006e9ae0) Stream added, broadcasting: 5\nI0402 22:07:59.368251 2880 log.go:172] (0xc000a9eb00) Reply frame received for 5\nI0402 22:07:59.420823 2880 log.go:172] (0xc000a9eb00) Data frame received for 5\nI0402 22:07:59.420849 2880 log.go:172] (0xc0006e9ae0) (5) Data frame handling\nI0402 22:07:59.420857 2880 log.go:172] (0xc0006e9ae0) (5) Data frame sent\nI0402 22:07:59.420866 2880 log.go:172] (0xc000a9eb00) Data frame received for 5\nI0402 22:07:59.420871 2880 log.go:172] (0xc0006e9ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0402 22:07:59.420890 2880 log.go:172] (0xc000a9eb00) Data frame received for 3\nI0402 22:07:59.420895 2880 log.go:172] (0xc000aa2000) (3) Data frame handling\nI0402 22:07:59.420902 2880 log.go:172] (0xc000aa2000) (3) Data frame sent\nI0402 22:07:59.420908 2880 log.go:172] (0xc000a9eb00) Data frame received for 3\nI0402 22:07:59.420913 2880 log.go:172] (0xc000aa2000) (3) Data frame handling\nI0402 22:07:59.422810 2880 log.go:172] (0xc000a9eb00) Data frame received for 1\nI0402 22:07:59.422842 2880 log.go:172] (0xc000932000) (1) Data frame handling\nI0402 22:07:59.422863 2880 log.go:172] (0xc000932000) (1) Data frame sent\nI0402 22:07:59.422881 2880 log.go:172] (0xc000a9eb00) (0xc000932000) Stream removed, broadcasting: 1\nI0402 22:07:59.422981 2880 log.go:172] (0xc000a9eb00) Go away received\nI0402 22:07:59.423230 2880 log.go:172] (0xc000a9eb00) (0xc000932000) Stream removed, broadcasting: 1\nI0402 22:07:59.423255 2880 log.go:172] (0xc000a9eb00) (0xc000aa2000) Stream removed, broadcasting: 3\nI0402 22:07:59.423265 2880 log.go:172] (0xc000a9eb00) (0xc0006e9ae0) Stream removed, broadcasting: 5\n" Apr 2 22:07:59.427: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 22:07:59.427: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 22:07:59.427: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8699 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 22:07:59.642: INFO: stderr: "I0402 22:07:59.559548 2902 log.go:172] (0xc0003c0fd0) (0xc000721a40) Create stream\nI0402 22:07:59.559601 2902 log.go:172] (0xc0003c0fd0) (0xc000721a40) Stream added, broadcasting: 1\nI0402 22:07:59.562250 2902 log.go:172] (0xc0003c0fd0) Reply frame received for 1\nI0402 22:07:59.562324 2902 log.go:172] (0xc0003c0fd0) (0xc000990000) Create stream\nI0402 22:07:59.562361 2902 log.go:172] (0xc0003c0fd0) (0xc000990000) Stream added, broadcasting: 3\nI0402 22:07:59.563452 2902 log.go:172] (0xc0003c0fd0) Reply frame received for 3\nI0402 22:07:59.563529 2902 log.go:172] (0xc0003c0fd0) (0xc00025e000) Create stream\nI0402 22:07:59.563563 2902 log.go:172] (0xc0003c0fd0) (0xc00025e000) Stream added, broadcasting: 5\nI0402 22:07:59.564608 2902 log.go:172] (0xc0003c0fd0) Reply frame received for 5\nI0402 22:07:59.634724 2902 log.go:172] (0xc0003c0fd0) Data frame received for 5\nI0402 22:07:59.634766 2902 log.go:172] (0xc0003c0fd0) Data frame received for 3\nI0402 22:07:59.634819 2902 log.go:172] (0xc000990000) (3) Data frame handling\nI0402 22:07:59.634838 2902 log.go:172] (0xc00025e000) (5) Data frame handling\nI0402 22:07:59.634858 2902 log.go:172] (0xc00025e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0402 22:07:59.634872 2902 log.go:172] (0xc000990000) (3) Data frame sent\nI0402 22:07:59.634891 2902 log.go:172] (0xc0003c0fd0) Data frame received for 3\nI0402 22:07:59.634898 2902 log.go:172] (0xc000990000) (3) Data frame handling\nI0402 22:07:59.634913 2902 log.go:172] (0xc0003c0fd0) Data frame received for 5\nI0402 22:07:59.634920 2902 log.go:172] (0xc00025e000) (5) Data frame handling\nI0402 22:07:59.636589 2902 log.go:172] (0xc0003c0fd0) Data frame received for 1\nI0402 22:07:59.636611 2902 log.go:172] (0xc000721a40) (1) Data frame handling\nI0402 22:07:59.636634 2902 log.go:172] (0xc000721a40) (1) Data frame sent\nI0402 22:07:59.636676 2902 log.go:172] (0xc0003c0fd0) (0xc000721a40) Stream removed, broadcasting: 1\nI0402 22:07:59.636761 2902 log.go:172] (0xc0003c0fd0) Go away received\nI0402 22:07:59.636965 2902 log.go:172] (0xc0003c0fd0) (0xc000721a40) Stream removed, broadcasting: 1\nI0402 22:07:59.636978 2902 log.go:172] (0xc0003c0fd0) (0xc000990000) Stream removed, broadcasting: 3\nI0402 22:07:59.636985 2902 log.go:172] (0xc0003c0fd0) (0xc00025e000) Stream removed, broadcasting: 5\n" Apr 2 22:07:59.642: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 22:07:59.642: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 22:07:59.646: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 2 22:08:09.651: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 22:08:09.651: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 22:08:09.651: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 2 22:08:09.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8699 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true' Apr 2 22:08:09.901: INFO: stderr: "I0402 22:08:09.787266 2925 log.go:172] (0xc000a61810) (0xc000a38780) Create stream\nI0402 22:08:09.787329 2925 log.go:172] (0xc000a61810) (0xc000a38780) Stream added, broadcasting: 1\nI0402 22:08:09.791374 2925 log.go:172] (0xc000a61810) Reply frame received for 1\nI0402 22:08:09.791415 2925 log.go:172] (0xc000a61810) (0xc0005d26e0) Create stream\nI0402 22:08:09.791428 2925 log.go:172] (0xc000a61810) (0xc0005d26e0) Stream added, broadcasting: 3\nI0402 22:08:09.792283 2925 log.go:172] (0xc000a61810) Reply frame received for 3\nI0402 22:08:09.792313 2925 log.go:172] (0xc000a61810) (0xc0001974a0) Create stream\nI0402 22:08:09.792322 2925 log.go:172] (0xc000a61810) (0xc0001974a0) Stream added, broadcasting: 5\nI0402 22:08:09.793554 2925 log.go:172] (0xc000a61810) Reply frame received for 5\nI0402 22:08:09.887054 2925 log.go:172] (0xc000a61810) Data frame received for 5\nI0402 22:08:09.887074 2925 log.go:172] (0xc0001974a0) (5) Data frame handling\nI0402 22:08:09.887082 2925 log.go:172] (0xc0001974a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 22:08:09.893607 2925 log.go:172] (0xc000a61810) Data frame received for 3\nI0402 22:08:09.893639 2925 log.go:172] (0xc0005d26e0) (3) Data frame handling\nI0402 22:08:09.893672 2925 log.go:172] (0xc0005d26e0) (3) Data frame sent\nI0402 22:08:09.894017 2925 log.go:172] (0xc000a61810) Data frame received for 3\nI0402 22:08:09.894072 2925 log.go:172] (0xc0005d26e0) (3) Data frame handling\nI0402 22:08:09.894109 2925 log.go:172] (0xc000a61810) Data frame received for 5\nI0402 22:08:09.894129 2925 log.go:172] (0xc0001974a0) (5) Data frame handling\nI0402 22:08:09.895611 2925 log.go:172] (0xc000a61810) Data frame received for 1\nI0402 22:08:09.895640 2925 log.go:172] (0xc000a38780) (1) Data frame handling\nI0402 22:08:09.895664 2925 log.go:172] (0xc000a38780) (1) Data frame sent\nI0402 22:08:09.895701 2925 log.go:172] (0xc000a61810) (0xc000a38780) Stream removed, broadcasting: 1\nI0402 22:08:09.895743 2925 log.go:172] (0xc000a61810) Go away received\nI0402 22:08:09.896054 2925 log.go:172] (0xc000a61810) (0xc000a38780) Stream removed, broadcasting: 1\nI0402 22:08:09.896077 2925 log.go:172] (0xc000a61810) (0xc0005d26e0) Stream removed, broadcasting: 3\nI0402 22:08:09.896095 2925 log.go:172] (0xc000a61810) (0xc0001974a0) Stream removed, broadcasting: 5\n" Apr 2 22:08:09.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 22:08:09.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 22:08:09.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8699 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 22:08:10.161: INFO: stderr: "I0402 22:08:10.034107 2948 log.go:172] (0xc000102f20) (0xc00070a320) Create stream\nI0402 22:08:10.034176 2948 log.go:172] (0xc000102f20) (0xc00070a320) Stream added, broadcasting: 1\nI0402 22:08:10.036700 2948 log.go:172] (0xc000102f20) Reply frame received for 1\nI0402 22:08:10.036761 2948 log.go:172] (0xc000102f20) (0xc0003a85a0) Create stream\nI0402 22:08:10.036783 2948 log.go:172] (0xc000102f20) (0xc0003a85a0) Stream added, broadcasting: 3\nI0402 22:08:10.038391 2948 log.go:172] (0xc000102f20) Reply frame received for 3\nI0402 22:08:10.038440 2948 log.go:172] (0xc000102f20) (0xc00070a3c0) Create stream\nI0402 22:08:10.038453 2948 
log.go:172] (0xc000102f20) (0xc00070a3c0) Stream added, broadcasting: 5\nI0402 22:08:10.039629 2948 log.go:172] (0xc000102f20) Reply frame received for 5\nI0402 22:08:10.110281 2948 log.go:172] (0xc000102f20) Data frame received for 5\nI0402 22:08:10.110320 2948 log.go:172] (0xc00070a3c0) (5) Data frame handling\nI0402 22:08:10.110349 2948 log.go:172] (0xc00070a3c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 22:08:10.155313 2948 log.go:172] (0xc000102f20) Data frame received for 5\nI0402 22:08:10.155338 2948 log.go:172] (0xc00070a3c0) (5) Data frame handling\nI0402 22:08:10.155381 2948 log.go:172] (0xc000102f20) Data frame received for 3\nI0402 22:08:10.155433 2948 log.go:172] (0xc0003a85a0) (3) Data frame handling\nI0402 22:08:10.155467 2948 log.go:172] (0xc0003a85a0) (3) Data frame sent\nI0402 22:08:10.155489 2948 log.go:172] (0xc000102f20) Data frame received for 3\nI0402 22:08:10.155507 2948 log.go:172] (0xc0003a85a0) (3) Data frame handling\nI0402 22:08:10.157041 2948 log.go:172] (0xc000102f20) Data frame received for 1\nI0402 22:08:10.157055 2948 log.go:172] (0xc00070a320) (1) Data frame handling\nI0402 22:08:10.157066 2948 log.go:172] (0xc00070a320) (1) Data frame sent\nI0402 22:08:10.157078 2948 log.go:172] (0xc000102f20) (0xc00070a320) Stream removed, broadcasting: 1\nI0402 22:08:10.157409 2948 log.go:172] (0xc000102f20) (0xc00070a320) Stream removed, broadcasting: 1\nI0402 22:08:10.157421 2948 log.go:172] (0xc000102f20) (0xc0003a85a0) Stream removed, broadcasting: 3\nI0402 22:08:10.157471 2948 log.go:172] (0xc000102f20) Go away received\nI0402 22:08:10.157518 2948 log.go:172] (0xc000102f20) (0xc00070a3c0) Stream removed, broadcasting: 5\n" Apr 2 22:08:10.161: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 22:08:10.161: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 22:08:10.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8699 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 22:08:10.422: INFO: stderr: "I0402 22:08:10.306751 2970 log.go:172] (0xc00034ee70) (0xc0006dd9a0) Create stream\nI0402 22:08:10.306869 2970 log.go:172] (0xc00034ee70) (0xc0006dd9a0) Stream added, broadcasting: 1\nI0402 22:08:10.309930 2970 log.go:172] (0xc00034ee70) Reply frame received for 1\nI0402 22:08:10.310005 2970 log.go:172] (0xc00034ee70) (0xc000ae8000) Create stream\nI0402 22:08:10.310041 2970 log.go:172] (0xc00034ee70) (0xc000ae8000) Stream added, broadcasting: 3\nI0402 22:08:10.311029 2970 log.go:172] (0xc00034ee70) Reply frame received for 3\nI0402 22:08:10.311083 2970 log.go:172] (0xc00034ee70) (0xc000296000) Create stream\nI0402 22:08:10.311100 2970 log.go:172] (0xc00034ee70) (0xc000296000) Stream added, broadcasting: 5\nI0402 22:08:10.312220 2970 log.go:172] (0xc00034ee70) Reply frame received for 5\nI0402 22:08:10.374281 2970 log.go:172] (0xc00034ee70) Data frame received for 5\nI0402 22:08:10.374324 2970 log.go:172] (0xc000296000) (5) Data frame handling\nI0402 22:08:10.374356 2970 log.go:172] (0xc000296000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 22:08:10.413497 2970 log.go:172] (0xc00034ee70) Data frame received for 5\nI0402 22:08:10.413554 2970 log.go:172] (0xc000296000) (5) Data frame handling\nI0402 22:08:10.413595 2970 log.go:172] (0xc00034ee70) Data frame received for 3\nI0402 
22:08:10.413680 2970 log.go:172] (0xc000ae8000) (3) Data frame handling\nI0402 22:08:10.413739 2970 log.go:172] (0xc000ae8000) (3) Data frame sent\nI0402 22:08:10.413781 2970 log.go:172] (0xc00034ee70) Data frame received for 3\nI0402 22:08:10.413800 2970 log.go:172] (0xc000ae8000) (3) Data frame handling\nI0402 22:08:10.415826 2970 log.go:172] (0xc00034ee70) Data frame received for 1\nI0402 22:08:10.415848 2970 log.go:172] (0xc0006dd9a0) (1) Data frame handling\nI0402 22:08:10.415862 2970 log.go:172] (0xc0006dd9a0) (1) Data frame sent\nI0402 22:08:10.415879 2970 log.go:172] (0xc00034ee70) (0xc0006dd9a0) Stream removed, broadcasting: 1\nI0402 22:08:10.416294 2970 log.go:172] (0xc00034ee70) (0xc0006dd9a0) Stream removed, broadcasting: 1\nI0402 22:08:10.416316 2970 log.go:172] (0xc00034ee70) (0xc000ae8000) Stream removed, broadcasting: 3\nI0402 22:08:10.416488 2970 log.go:172] (0xc00034ee70) (0xc000296000) Stream removed, broadcasting: 5\n" Apr 2 22:08:10.423: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 22:08:10.423: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 22:08:10.423: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 22:08:10.426: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 2 22:08:20.433: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 22:08:20.433: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 2 22:08:20.433: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 2 22:08:20.453: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 22:08:20.453: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:28 +0000 UTC }] Apr 2 22:08:20.453: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC }] Apr 2 22:08:20.453: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC }] Apr 2 22:08:20.453: INFO: Apr 2 22:08:20.453: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 2 22:08:21.533: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 22:08:21.533: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 
22:07:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:28 +0000 UTC }] Apr 2 22:08:21.533: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC }] Apr 2 22:08:21.533: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC }] Apr 2 22:08:21.533: INFO: Apr 2 22:08:21.533: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 2 22:08:22.537: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 22:08:22.537: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:28 +0000 UTC }] Apr 2 22:08:22.537: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC }] Apr 2 22:08:22.537: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC }] Apr 2 22:08:22.537: INFO: Apr 2 22:08:22.537: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 2 22:08:23.557: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 22:08:23.557: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 
22:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 22:07:48 +0000 UTC }] Apr 2 22:08:23.557: INFO: Apr 2 22:08:23.557: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 2 22:08:24.561: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.88198279s Apr 2 22:08:25.567: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.877486648s Apr 2 22:08:26.571: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.871774972s Apr 2 22:08:27.575: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.86797424s Apr 2 22:08:28.579: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.863641854s Apr 2 22:08:29.583: INFO: Verifying statefulset ss doesn't scale past 0 for another 859.916479ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8699 Apr 2 22:08:30.588: INFO: Scaling statefulset ss to 0 Apr 2 22:08:30.598: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 2 22:08:30.600: INFO: Deleting all statefulsets in ns statefulset-8699 Apr 2 22:08:30.602: INFO: Scaling statefulset ss to 0 Apr 2 22:08:30.613: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 22:08:30.615: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:08:30.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8699" for this suite.
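A note on the mechanism this test leans on: the webserver image serves /usr/local/apache2/htdocs/index.html and the pod's readiness probe fetches it, so moving the file aside fails the probe without restarting the container, and moving it back heals it. That is why every mv above is immediately followed by a Ready=true/false flip in the pod conditions. A minimal hand-run sketch of the same toggle (pod and namespace names are the ones from this run and would differ elsewhere):

    # Break readiness: the probe GETs index.html, so hiding it makes the kubelet mark ss-0 NotReady.
    kubectl exec -n statefulset-8699 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

    # Poll until the container reports ready=false (probe failures take a few probe periods).
    until [ "$(kubectl get pod ss-0 -n statefulset-8699 -o jsonpath='{.status.containerStatuses[0].ready}')" = "false" ]; do sleep 2; done

    # Heal readiness; '|| true' keeps the exec from failing on replicas where the file was never moved.
    kubectl exec -n statefulset-8699 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'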
• [SLOW TEST:62.127 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":219,"skipped":3693,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:08:30.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2794 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 2 22:08:30.699: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 2 22:08:54.817: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.38:8080/dial?request=hostname&protocol=udp&host=10.244.1.226&port=8081&tries=1'] Namespace:pod-network-test-2794 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 22:08:54.817: INFO: >>> kubeConfig: /root/.kube/config I0402 22:08:54.846465 6 log.go:172] (0xc00216e370) (0xc001750640) Create stream I0402 22:08:54.846505 6 log.go:172] (0xc00216e370) (0xc001750640) Stream added, broadcasting: 1 I0402 22:08:54.849742 6 log.go:172] (0xc00216e370) Reply frame received for 1 I0402 22:08:54.849779 6 log.go:172] (0xc00216e370) (0xc002328000) Create stream I0402 22:08:54.849793 6 log.go:172] (0xc00216e370) (0xc002328000) Stream added, broadcasting: 3 I0402 22:08:54.850825 6 log.go:172] (0xc00216e370) Reply frame received for 3 I0402 22:08:54.850880 6 log.go:172] (0xc00216e370) (0xc0017508c0) Create stream I0402 22:08:54.850897 6 log.go:172] (0xc00216e370) (0xc0017508c0) Stream added, broadcasting: 5 I0402 22:08:54.851950 6 log.go:172] (0xc00216e370) Reply frame received for 5 I0402 22:08:54.928238 6 log.go:172] (0xc00216e370) Data frame received for 3 I0402 22:08:54.928267 6 log.go:172] (0xc002328000) (3) Data frame handling I0402 22:08:54.928281 6 log.go:172] (0xc002328000) (3) Data frame sent I0402 22:08:54.929296 6 log.go:172] (0xc00216e370) Data frame received for 5 I0402 22:08:54.929309 6 log.go:172] (0xc0017508c0) (5) Data frame handling I0402 22:08:54.929597 6 log.go:172] (0xc00216e370) Data frame received for 3 I0402 22:08:54.929610 6 log.go:172] (0xc002328000) (3) Data frame handling I0402 
22:08:54.931469 6 log.go:172] (0xc00216e370) Data frame received for 1 I0402 22:08:54.931487 6 log.go:172] (0xc001750640) (1) Data frame handling I0402 22:08:54.931494 6 log.go:172] (0xc001750640) (1) Data frame sent I0402 22:08:54.931502 6 log.go:172] (0xc00216e370) (0xc001750640) Stream removed, broadcasting: 1 I0402 22:08:54.931551 6 log.go:172] (0xc00216e370) Go away received I0402 22:08:54.931579 6 log.go:172] (0xc00216e370) (0xc001750640) Stream removed, broadcasting: 1 I0402 22:08:54.931588 6 log.go:172] (0xc00216e370) (0xc002328000) Stream removed, broadcasting: 3 I0402 22:08:54.931599 6 log.go:172] (0xc00216e370) (0xc0017508c0) Stream removed, broadcasting: 5 Apr 2 22:08:54.931: INFO: Waiting for responses: map[] Apr 2 22:08:54.935: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.38:8080/dial?request=hostname&protocol=udp&host=10.244.2.37&port=8081&tries=1'] Namespace:pod-network-test-2794 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 22:08:54.935: INFO: >>> kubeConfig: /root/.kube/config I0402 22:08:54.968479 6 log.go:172] (0xc0025f6000) (0xc002328500) Create stream I0402 22:08:54.968509 6 log.go:172] (0xc0025f6000) (0xc002328500) Stream added, broadcasting: 1 I0402 22:08:54.971570 6 log.go:172] (0xc0025f6000) Reply frame received for 1 I0402 22:08:54.971608 6 log.go:172] (0xc0025f6000) (0xc001a83c20) Create stream I0402 22:08:54.971622 6 log.go:172] (0xc0025f6000) (0xc001a83c20) Stream added, broadcasting: 3 I0402 22:08:54.972761 6 log.go:172] (0xc0025f6000) Reply frame received for 3 I0402 22:08:54.972804 6 log.go:172] (0xc0025f6000) (0xc0016eff40) Create stream I0402 22:08:54.972822 6 log.go:172] (0xc0025f6000) (0xc0016eff40) Stream added, broadcasting: 5 I0402 22:08:54.974198 6 log.go:172] (0xc0025f6000) Reply frame received for 5 I0402 22:08:55.055655 6 log.go:172] (0xc0025f6000) Data frame received for 3 I0402 22:08:55.055685 6 log.go:172] (0xc001a83c20) (3) Data frame handling I0402 22:08:55.055704 6 log.go:172] (0xc001a83c20) (3) Data frame sent I0402 22:08:55.056537 6 log.go:172] (0xc0025f6000) Data frame received for 5 I0402 22:08:55.056550 6 log.go:172] (0xc0016eff40) (5) Data frame handling I0402 22:08:55.056576 6 log.go:172] (0xc0025f6000) Data frame received for 3 I0402 22:08:55.056603 6 log.go:172] (0xc001a83c20) (3) Data frame handling I0402 22:08:55.058517 6 log.go:172] (0xc0025f6000) Data frame received for 1 I0402 22:08:55.058535 6 log.go:172] (0xc002328500) (1) Data frame handling I0402 22:08:55.058542 6 log.go:172] (0xc002328500) (1) Data frame sent I0402 22:08:55.058552 6 log.go:172] (0xc0025f6000) (0xc002328500) Stream removed, broadcasting: 1 I0402 22:08:55.058628 6 log.go:172] (0xc0025f6000) (0xc002328500) Stream removed, broadcasting: 1 I0402 22:08:55.058647 6 log.go:172] (0xc0025f6000) (0xc001a83c20) Stream removed, broadcasting: 3 I0402 22:08:55.058815 6 log.go:172] (0xc0025f6000) Go away received I0402 22:08:55.058839 6 log.go:172] (0xc0025f6000) (0xc0016eff40) Stream removed, broadcasting: 5 Apr 2 22:08:55.058: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:08:55.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2794" for this suite. 
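For reference, the /dial URL driving this check is agnhost netexec's HTTP probe helper: the pod serving :8080 is asked to send the request= payload ("hostname") over the given protocol to the target pod and to report the replies as JSON. A hand-run equivalent using the pod IPs from this run (the 10.244.x.x addresses are specific to this cluster):

    # Ask the test pod's webserver to dial a peer pod over UDP and echo back the peer's hostname.
    kubectl exec -n pod-network-test-2794 host-test-container-pod -c agnhost -- /bin/sh -c \
      "curl -g -q -s 'http://10.244.2.38:8080/dial?request=hostname&protocol=udp&host=10.244.1.226&port=8081&tries=1'"
    # A successful reply lists the peer's hostname, e.g. {"responses":["netserver-0"]}; the
    # "Waiting for responses: map[]" lines above indicate no expected endpoint was still missing.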
• [SLOW TEST:24.414 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3697,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:08:55.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9552 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 2 22:08:55.119: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 2 22:09:19.259: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.227 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9552 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 22:09:19.259: INFO: >>> kubeConfig: /root/.kube/config I0402 22:09:19.295651 6 log.go:172] (0xc001a84630) (0xc001411680) Create stream I0402 22:09:19.295687 6 log.go:172] (0xc001a84630) (0xc001411680) Stream added, broadcasting: 1 I0402 22:09:19.298523 6 log.go:172] (0xc001a84630) Reply frame received for 1 I0402 22:09:19.298567 6 log.go:172] (0xc001a84630) (0xc00239ba40) Create stream I0402 22:09:19.298588 6 log.go:172] (0xc001a84630) (0xc00239ba40) Stream added, broadcasting: 3 I0402 22:09:19.299779 6 log.go:172] (0xc001a84630) Reply frame received for 3 I0402 22:09:19.299992 6 log.go:172] (0xc001a84630) (0xc001411a40) Create stream I0402 22:09:19.300020 6 log.go:172] (0xc001a84630) (0xc001411a40) Stream added, broadcasting: 5 I0402 22:09:19.301012 6 log.go:172] (0xc001a84630) Reply frame received for 5 I0402 22:09:20.381685 6 log.go:172] (0xc001a84630) Data frame received for 3 I0402 22:09:20.381707 6 log.go:172] (0xc00239ba40) (3) Data frame handling I0402 22:09:20.381719 6 log.go:172] (0xc00239ba40) (3) Data frame sent I0402 22:09:20.381723 6 log.go:172] (0xc001a84630) Data frame received for 3 I0402 22:09:20.381727 6 log.go:172] (0xc00239ba40) (3) Data frame handling I0402 22:09:20.381880 6 log.go:172] (0xc001a84630) Data frame received for 5 I0402 22:09:20.381905 6 log.go:172] (0xc001411a40) (5) Data frame handling I0402 22:09:20.383764 6 log.go:172] (0xc001a84630) Data frame received for 1 I0402 22:09:20.383780 6 log.go:172] 
(0xc001411680) (1) Data frame handling I0402 22:09:20.383798 6 log.go:172] (0xc001411680) (1) Data frame sent I0402 22:09:20.383812 6 log.go:172] (0xc001a84630) (0xc001411680) Stream removed, broadcasting: 1 I0402 22:09:20.383823 6 log.go:172] (0xc001a84630) Go away received I0402 22:09:20.383952 6 log.go:172] (0xc001a84630) (0xc001411680) Stream removed, broadcasting: 1 I0402 22:09:20.383973 6 log.go:172] (0xc001a84630) (0xc00239ba40) Stream removed, broadcasting: 3 I0402 22:09:20.383984 6 log.go:172] (0xc001a84630) (0xc001411a40) Stream removed, broadcasting: 5 Apr 2 22:09:20.384: INFO: Found all expected endpoints: [netserver-0] Apr 2 22:09:20.387: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.39 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9552 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 22:09:20.387: INFO: >>> kubeConfig: /root/.kube/config I0402 22:09:20.422520 6 log.go:172] (0xc002354420) (0xc001e66500) Create stream I0402 22:09:20.422563 6 log.go:172] (0xc002354420) (0xc001e66500) Stream added, broadcasting: 1 I0402 22:09:20.424864 6 log.go:172] (0xc002354420) Reply frame received for 1 I0402 22:09:20.424912 6 log.go:172] (0xc002354420) (0xc001e66820) Create stream I0402 22:09:20.424931 6 log.go:172] (0xc002354420) (0xc001e66820) Stream added, broadcasting: 3 I0402 22:09:20.425986 6 log.go:172] (0xc002354420) Reply frame received for 3 I0402 22:09:20.426034 6 log.go:172] (0xc002354420) (0xc000fcaaa0) Create stream I0402 22:09:20.426049 6 log.go:172] (0xc002354420) (0xc000fcaaa0) Stream added, broadcasting: 5 I0402 22:09:20.426838 6 log.go:172] (0xc002354420) Reply frame received for 5 I0402 22:09:21.497073 6 log.go:172] (0xc002354420) Data frame received for 3 I0402 22:09:21.497251 6 log.go:172] (0xc001e66820) (3) Data frame handling I0402 22:09:21.497280 6 log.go:172] (0xc001e66820) (3) Data frame sent I0402 22:09:21.497375 6 log.go:172] (0xc002354420) Data frame received for 5 I0402 22:09:21.497387 6 log.go:172] (0xc000fcaaa0) (5) Data frame handling I0402 22:09:21.497421 6 log.go:172] (0xc002354420) Data frame received for 3 I0402 22:09:21.497437 6 log.go:172] (0xc001e66820) (3) Data frame handling I0402 22:09:21.499423 6 log.go:172] (0xc002354420) Data frame received for 1 I0402 22:09:21.499458 6 log.go:172] (0xc001e66500) (1) Data frame handling I0402 22:09:21.499477 6 log.go:172] (0xc001e66500) (1) Data frame sent I0402 22:09:21.499501 6 log.go:172] (0xc002354420) (0xc001e66500) Stream removed, broadcasting: 1 I0402 22:09:21.499526 6 log.go:172] (0xc002354420) Go away received I0402 22:09:21.499634 6 log.go:172] (0xc002354420) (0xc001e66500) Stream removed, broadcasting: 1 I0402 22:09:21.499663 6 log.go:172] (0xc002354420) (0xc001e66820) Stream removed, broadcasting: 3 I0402 22:09:21.499676 6 log.go:172] (0xc002354420) (0xc000fcaaa0) Stream removed, broadcasting: 5 Apr 2 22:09:21.499: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:09:21.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9552" for this suite. 
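The node-pod variant just above skips the HTTP /dial helper and speaks UDP directly: agnhost netexec answers the literal command hostName on its UDP port with the pod's hostname, and the trailing grep drops blank padding so that empty output unambiguously means "no UDP reply". Hand-run form, again with this run's pod IPs:

    # One-second bounded UDP probe of netserver-0; prints its hostname on success.
    kubectl exec -n pod-network-test-9552 host-test-container-pod -c agnhost -- /bin/sh -c \
      "echo hostName | nc -w 1 -u 10.244.1.227 8081 | grep -v '^\s*$'"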
• [SLOW TEST:26.442 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3703,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:09:21.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 22:09:21.991: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 22:09:24.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462162, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462162, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462162, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462161, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 22:09:27.067: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the 
/apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:09:27.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4374" for this suite. STEP: Destroying namespace "webhook-4374-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.991 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":222,"skipped":3705,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:09:27.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0402 22:10:07.717480 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
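The "delete options" this test exercises are the API's deletion propagation policy: deleting the rc with propagationPolicy: Orphan removes the controller without cascading to its pods, which is exactly what the 30-second watch above is verifying. Two hand-run equivalents (the rc name my-rc is a placeholder, not the name the test generated; the kubectl flag shown is the v1.17-era spelling, later renamed --cascade=orphan):

    # kubectl form: --cascade=false issued an orphaning delete in this release.
    kubectl delete rc my-rc -n gc-2711 --cascade=false

    # Raw API form: pass DeleteOptions in the DELETE body (assumes `kubectl proxy` on localhost:8001).
    curl -X DELETE 'http://localhost:8001/api/v1/namespaces/gc-2711/replicationcontrollers/my-rc' \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'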
Apr 2 22:10:07.717: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:10:07.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2711" for this suite. • [SLOW TEST:40.225 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":223,"skipped":3727,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:10:07.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 2 22:10:07.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4561' Apr 2 22:10:08.064: INFO: stderr: "" Apr 2 22:10:08.064: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
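The Update Demo assertions that follow are all client-side Go templates over kubectl get, so two commands carry the whole test: one lists pod names for the name=update-demo selector, the other prints the literal true only when the update-demo container is in a running state (exists is a helper kubectl's template printer provides; empty output just means "not running yet", which is why the test retries). Note also that kubectl scale only updates the rc's desired replica count, so after scaling down, the name-listing poll keeps showing two pods until the surplus pod finishes terminating; that lag is what the repeated "expected=1 actual=2" lines below record. The three commands as run in this test:

    # List the pods behind the selector.
    kubectl get pods -n kubectl-4561 -l name=update-demo -o template \
      --template='{{range.items}}{{.metadata.name}} {{end}}'

    # Print "true" only when the update-demo container reports a running state.
    kubectl get pods update-demo-nautilus-cgv78 -n kubectl-4561 -o template \
      --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

    # Resize the rc; --timeout bounds how long kubectl waits for the scale operation before giving up.
    kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n kubectl-4561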
Apr 2 22:10:08.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:08.194: INFO: stderr: "" Apr 2 22:10:08.194: INFO: stdout: "update-demo-nautilus-cgv78 update-demo-nautilus-g4l7m " Apr 2 22:10:08.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:08.291: INFO: stderr: "" Apr 2 22:10:08.291: INFO: stdout: "" Apr 2 22:10:08.291: INFO: update-demo-nautilus-cgv78 is created but not running Apr 2 22:10:13.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:13.381: INFO: stderr: "" Apr 2 22:10:13.381: INFO: stdout: "update-demo-nautilus-cgv78 update-demo-nautilus-g4l7m " Apr 2 22:10:13.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:13.620: INFO: stderr: "" Apr 2 22:10:13.620: INFO: stdout: "true" Apr 2 22:10:13.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:13.775: INFO: stderr: "" Apr 2 22:10:13.775: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 22:10:13.775: INFO: validating pod update-demo-nautilus-cgv78 Apr 2 22:10:14.039: INFO: got data: { "image": "nautilus.jpg" } Apr 2 22:10:14.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 22:10:14.039: INFO: update-demo-nautilus-cgv78 is verified up and running Apr 2 22:10:14.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4l7m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:14.285: INFO: stderr: "" Apr 2 22:10:14.285: INFO: stdout: "true" Apr 2 22:10:14.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4l7m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:14.433: INFO: stderr: "" Apr 2 22:10:14.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 22:10:14.433: INFO: validating pod update-demo-nautilus-g4l7m Apr 2 22:10:14.444: INFO: got data: { "image": "nautilus.jpg" } Apr 2 22:10:14.444: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 2 22:10:14.444: INFO: update-demo-nautilus-g4l7m is verified up and running STEP: scaling down the replication controller Apr 2 22:10:14.447: INFO: scanned /root for discovery docs: Apr 2 22:10:14.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4561' Apr 2 22:10:15.831: INFO: stderr: "" Apr 2 22:10:15.831: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 2 22:10:15.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:16.042: INFO: stderr: "" Apr 2 22:10:16.042: INFO: stdout: "update-demo-nautilus-cgv78 update-demo-nautilus-g4l7m " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 2 22:10:21.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:21.147: INFO: stderr: "" Apr 2 22:10:21.147: INFO: stdout: "update-demo-nautilus-cgv78 update-demo-nautilus-g4l7m " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 2 22:10:26.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:26.246: INFO: stderr: "" Apr 2 22:10:26.246: INFO: stdout: "update-demo-nautilus-cgv78 update-demo-nautilus-g4l7m " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 2 22:10:31.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:31.350: INFO: stderr: "" Apr 2 22:10:31.350: INFO: stdout: "update-demo-nautilus-cgv78 " Apr 2 22:10:31.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:31.438: INFO: stderr: "" Apr 2 22:10:31.439: INFO: stdout: "true" Apr 2 22:10:31.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:31.523: INFO: stderr: "" Apr 2 22:10:31.523: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 22:10:31.523: INFO: validating pod update-demo-nautilus-cgv78 Apr 2 22:10:31.526: INFO: got data: { "image": "nautilus.jpg" } Apr 2 22:10:31.526: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 2 22:10:31.526: INFO: update-demo-nautilus-cgv78 is verified up and running STEP: scaling up the replication controller Apr 2 22:10:31.528: INFO: scanned /root for discovery docs: Apr 2 22:10:31.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4561' Apr 2 22:10:32.646: INFO: stderr: "" Apr 2 22:10:32.646: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 2 22:10:32.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:32.747: INFO: stderr: "" Apr 2 22:10:32.747: INFO: stdout: "update-demo-nautilus-cgv78 update-demo-nautilus-h2xkz " Apr 2 22:10:32.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:32.836: INFO: stderr: "" Apr 2 22:10:32.836: INFO: stdout: "true" Apr 2 22:10:32.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:32.924: INFO: stderr: "" Apr 2 22:10:32.924: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 22:10:32.924: INFO: validating pod update-demo-nautilus-cgv78 Apr 2 22:10:32.927: INFO: got data: { "image": "nautilus.jpg" } Apr 2 22:10:32.927: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 22:10:32.927: INFO: update-demo-nautilus-cgv78 is verified up and running Apr 2 22:10:32.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2xkz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:33.014: INFO: stderr: "" Apr 2 22:10:33.014: INFO: stdout: "" Apr 2 22:10:33.014: INFO: update-demo-nautilus-h2xkz is created but not running Apr 2 22:10:38.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4561' Apr 2 22:10:38.119: INFO: stderr: "" Apr 2 22:10:38.119: INFO: stdout: "update-demo-nautilus-cgv78 update-demo-nautilus-h2xkz " Apr 2 22:10:38.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:38.214: INFO: stderr: "" Apr 2 22:10:38.214: INFO: stdout: "true" Apr 2 22:10:38.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgv78 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:38.307: INFO: stderr: "" Apr 2 22:10:38.307: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 22:10:38.307: INFO: validating pod update-demo-nautilus-cgv78 Apr 2 22:10:38.311: INFO: got data: { "image": "nautilus.jpg" } Apr 2 22:10:38.311: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 22:10:38.311: INFO: update-demo-nautilus-cgv78 is verified up and running Apr 2 22:10:38.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2xkz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:38.413: INFO: stderr: "" Apr 2 22:10:38.413: INFO: stdout: "true" Apr 2 22:10:38.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2xkz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4561' Apr 2 22:10:38.509: INFO: stderr: "" Apr 2 22:10:38.509: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 22:10:38.509: INFO: validating pod update-demo-nautilus-h2xkz Apr 2 22:10:38.513: INFO: got data: { "image": "nautilus.jpg" } Apr 2 22:10:38.513: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 22:10:38.513: INFO: update-demo-nautilus-h2xkz is verified up and running STEP: using delete to clean up resources Apr 2 22:10:38.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4561' Apr 2 22:10:38.628: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 2 22:10:38.628: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 2 22:10:38.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4561' Apr 2 22:10:38.724: INFO: stderr: "No resources found in kubectl-4561 namespace.\n" Apr 2 22:10:38.724: INFO: stdout: "" Apr 2 22:10:38.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4561 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 22:10:38.828: INFO: stderr: "" Apr 2 22:10:38.828: INFO: stdout: "update-demo-nautilus-cgv78\nupdate-demo-nautilus-h2xkz\n" Apr 2 22:10:39.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4561' Apr 2 22:10:39.446: INFO: stderr: "No resources found in kubectl-4561 namespace.\n" Apr 2 22:10:39.446: INFO: stdout: "" Apr 2 22:10:39.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4561 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 22:10:39.552: INFO: stderr: "" Apr 2 22:10:39.552: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:10:39.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4561" for this suite. • [SLOW TEST:31.832 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":224,"skipped":3743,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:10:39.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ae48dbef-126a-465d-90f3-4b4e21b14042 STEP: Creating a pod to test consume secrets Apr 2 22:10:39.836: INFO: Waiting up to 5m0s for pod "pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4" in namespace "secrets-9724" to be "success or failure" Apr 2 22:10:39.852: INFO: Pod "pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.991253ms Apr 2 22:10:41.882: INFO: Pod "pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046364735s Apr 2 22:10:43.886: INFO: Pod "pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05015911s STEP: Saw pod success Apr 2 22:10:43.886: INFO: Pod "pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4" satisfied condition "success or failure" Apr 2 22:10:43.888: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4 container secret-volume-test: STEP: delete the pod Apr 2 22:10:44.051: INFO: Waiting for pod pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4 to disappear Apr 2 22:10:44.061: INFO: Pod pod-secrets-a6acab1e-f709-4f48-8d78-ec929d55b6d4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:10:44.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9724" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3743,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:10:44.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 2 22:10:44.125: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:11:00.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4823" for this suite. 
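The version-rename checks above go through the cluster's published OpenAPI document, and the same information is reachable from outside the suite. A sketch, assuming jq is available; the grep pattern below is this test's fixture prefix:

  kubectl get --raw /openapi/v2 | jq '.definitions | keys[]' | grep crd-publish-openapi
  kubectl get crd -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.versions[*].name}{"\n"}{end}'
  # the second command lists which version names each CRD currently serves,
  # so a renamed version shows up (and the old name disappears) here as well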
• [SLOW TEST:16.582 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":226,"skipped":3759,"failed":0} [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:11:00.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:11:04.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7260" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3759,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:11:04.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:11:04.842: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c3d0395f-6c95-427f-a3de-cde419987ccb" in namespace "security-context-test-6040" to be "success or failure" Apr 2 22:11:04.846: INFO: Pod "busybox-readonly-false-c3d0395f-6c95-427f-a3de-cde419987ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.904656ms Apr 2 22:11:06.850: INFO: Pod "busybox-readonly-false-c3d0395f-6c95-427f-a3de-cde419987ccb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007751371s Apr 2 22:11:08.853: INFO: Pod "busybox-readonly-false-c3d0395f-6c95-427f-a3de-cde419987ccb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011418573s Apr 2 22:11:08.853: INFO: Pod "busybox-readonly-false-c3d0395f-6c95-427f-a3de-cde419987ccb" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:11:08.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6040" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3766,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:11:08.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:11:08.958: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:11:09.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8138" for this suite. 
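The status sub-resource exercised above is a separate endpoint from the CRD object itself: writes to /status cannot modify the spec, and spec updates cannot touch status. The suite drives this through client-go; from the command line the read side looks like the sketch below. The CRD name is illustrative, and the --subresource flag requires a much newer kubectl (v1.24+) than this run's v1.17 client:

  kubectl get crd foos.example.com -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  # with kubectl v1.24+, the sub-resource can be addressed directly:
  kubectl get crd foos.example.com --subresource=status -o yaml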
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":229,"skipped":3778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:11:09.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:11:09.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 2 22:11:10.413: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T22:11:10Z generation:1 name:name1 resourceVersion:4865654 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f206f038-22a3-4eb2-a8b1-6f73bf4a1fcd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 2 22:11:20.419: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T22:11:20Z generation:1 name:name2 resourceVersion:4865698 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:79ceb6df-64d9-4e2b-bc84-e77a804a13db] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 2 22:11:30.425: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T22:11:10Z generation:2 name:name1 resourceVersion:4865728 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f206f038-22a3-4eb2-a8b1-6f73bf4a1fcd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 2 22:11:40.431: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T22:11:20Z generation:2 name:name2 resourceVersion:4865758 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:79ceb6df-64d9-4e2b-bc84-e77a804a13db] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 2 22:11:50.439: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T22:11:10Z generation:2 name:name1 resourceVersion:4865794 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f206f038-22a3-4eb2-a8b1-6f73bf4a1fcd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 2 22:12:00.447: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-04-02T22:11:20Z generation:2 name:name2 resourceVersion:4865824 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:79ceb6df-64d9-4e2b-bc84-e77a804a13db] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:12:10.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3757" for this suite. • [SLOW TEST:61.340 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":230,"skipped":3865,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:12:10.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3686 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 2 22:12:11.069: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 2 22:12:37.191: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.49:8080/dial?request=hostname&protocol=http&host=10.244.1.238&port=8080&tries=1'] Namespace:pod-network-test-3686 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 22:12:37.191: INFO: >>> kubeConfig: /root/.kube/config I0402 22:12:37.228308 6 log.go:172] (0xc004979ad0) (0xc001e67860) Create stream I0402 22:12:37.228332 6 log.go:172] (0xc004979ad0) (0xc001e67860) Stream added, broadcasting: 1 I0402 22:12:37.230091 6 log.go:172] (0xc004979ad0) Reply frame received for 1 I0402 22:12:37.230125 6 log.go:172] (0xc004979ad0) (0xc001e67cc0) Create stream I0402 22:12:37.230139 6 log.go:172] (0xc004979ad0) (0xc001e67cc0) Stream added, broadcasting: 3 I0402 22:12:37.230958 6 log.go:172] (0xc004979ad0) Reply frame received for 3 I0402 22:12:37.230980 6 log.go:172] (0xc004979ad0) (0xc001e67f40) Create stream I0402 22:12:37.230989 6 log.go:172] (0xc004979ad0) (0xc001e67f40) Stream added, broadcasting: 5 I0402 22:12:37.231741 6 log.go:172] 
(0xc004979ad0) Reply frame received for 5 I0402 22:12:37.324718 6 log.go:172] (0xc004979ad0) Data frame received for 3 I0402 22:12:37.324755 6 log.go:172] (0xc001e67cc0) (3) Data frame handling I0402 22:12:37.324791 6 log.go:172] (0xc001e67cc0) (3) Data frame sent I0402 22:12:37.324932 6 log.go:172] (0xc004979ad0) Data frame received for 3 I0402 22:12:37.324951 6 log.go:172] (0xc001e67cc0) (3) Data frame handling I0402 22:12:37.325018 6 log.go:172] (0xc004979ad0) Data frame received for 5 I0402 22:12:37.325051 6 log.go:172] (0xc001e67f40) (5) Data frame handling I0402 22:12:37.327011 6 log.go:172] (0xc004979ad0) Data frame received for 1 I0402 22:12:37.327052 6 log.go:172] (0xc001e67860) (1) Data frame handling I0402 22:12:37.327076 6 log.go:172] (0xc001e67860) (1) Data frame sent I0402 22:12:37.327132 6 log.go:172] (0xc004979ad0) (0xc001e67860) Stream removed, broadcasting: 1 I0402 22:12:37.327176 6 log.go:172] (0xc004979ad0) Go away received I0402 22:12:37.327268 6 log.go:172] (0xc004979ad0) (0xc001e67860) Stream removed, broadcasting: 1 I0402 22:12:37.327287 6 log.go:172] (0xc004979ad0) (0xc001e67cc0) Stream removed, broadcasting: 3 I0402 22:12:37.327302 6 log.go:172] (0xc004979ad0) (0xc001e67f40) Stream removed, broadcasting: 5 Apr 2 22:12:37.327: INFO: Waiting for responses: map[] Apr 2 22:12:37.331: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.49:8080/dial?request=hostname&protocol=http&host=10.244.2.48&port=8080&tries=1'] Namespace:pod-network-test-3686 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 22:12:37.331: INFO: >>> kubeConfig: /root/.kube/config I0402 22:12:37.356509 6 log.go:172] (0xc004e922c0) (0xc000fca6e0) Create stream I0402 22:12:37.356536 6 log.go:172] (0xc004e922c0) (0xc000fca6e0) Stream added, broadcasting: 1 I0402 22:12:37.364448 6 log.go:172] (0xc004e922c0) Reply frame received for 1 I0402 22:12:37.364491 6 log.go:172] (0xc004e922c0) (0xc000fca0a0) Create stream I0402 22:12:37.364503 6 log.go:172] (0xc004e922c0) (0xc000fca0a0) Stream added, broadcasting: 3 I0402 22:12:37.365339 6 log.go:172] (0xc004e922c0) Reply frame received for 3 I0402 22:12:37.365367 6 log.go:172] (0xc004e922c0) (0xc001e66000) Create stream I0402 22:12:37.365379 6 log.go:172] (0xc004e922c0) (0xc001e66000) Stream added, broadcasting: 5 I0402 22:12:37.366047 6 log.go:172] (0xc004e922c0) Reply frame received for 5 I0402 22:12:37.439993 6 log.go:172] (0xc004e922c0) Data frame received for 3 I0402 22:12:37.440064 6 log.go:172] (0xc000fca0a0) (3) Data frame handling I0402 22:12:37.440124 6 log.go:172] (0xc000fca0a0) (3) Data frame sent I0402 22:12:37.440498 6 log.go:172] (0xc004e922c0) Data frame received for 3 I0402 22:12:37.440513 6 log.go:172] (0xc000fca0a0) (3) Data frame handling I0402 22:12:37.440529 6 log.go:172] (0xc004e922c0) Data frame received for 5 I0402 22:12:37.440538 6 log.go:172] (0xc001e66000) (5) Data frame handling I0402 22:12:37.442096 6 log.go:172] (0xc004e922c0) Data frame received for 1 I0402 22:12:37.442113 6 log.go:172] (0xc000fca6e0) (1) Data frame handling I0402 22:12:37.442121 6 log.go:172] (0xc000fca6e0) (1) Data frame sent I0402 22:12:37.442143 6 log.go:172] (0xc004e922c0) (0xc000fca6e0) Stream removed, broadcasting: 1 I0402 22:12:37.442190 6 log.go:172] (0xc004e922c0) Go away received I0402 22:12:37.442246 6 log.go:172] (0xc004e922c0) (0xc000fca6e0) Stream removed, broadcasting: 1 I0402 22:12:37.442273 6 log.go:172] (0xc004e922c0) (0xc000fca0a0) Stream 
removed, broadcasting: 3 I0402 22:12:37.442289 6 log.go:172] (0xc004e922c0) (0xc001e66000) Stream removed, broadcasting: 5 Apr 2 22:12:37.442: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:12:37.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3686" for this suite. • [SLOW TEST:26.484 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3866,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:12:37.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:13:37.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9057" for this suite. 
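A failing readiness probe, unlike a liveness probe, never restarts the container; it only keeps the pod out of service, which is exactly what the 60-second window above asserts: never Ready, zero restarts. A minimal stand-alone reproduction; the pod name and probe command are illustrative, not the suite's fixture:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: never-ready
  spec:
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]   # always fails, so READY stays 0/1
        periodSeconds: 5
  EOF
  kubectl get pod never-ready -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}'
  # expected output: false 0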
• [SLOW TEST:60.107 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3878,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:13:37.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-1430886a-4ebd-4276-a527-c060b06d754e STEP: Creating a pod to test consume secrets Apr 2 22:13:37.624: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea" in namespace "projected-644" to be "success or failure" Apr 2 22:13:37.628: INFO: Pod "pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.898845ms Apr 2 22:13:39.631: INFO: Pod "pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007201374s Apr 2 22:13:41.635: INFO: Pod "pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011148682s STEP: Saw pod success Apr 2 22:13:41.635: INFO: Pod "pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea" satisfied condition "success or failure" Apr 2 22:13:41.638: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea container projected-secret-volume-test: STEP: delete the pod Apr 2 22:13:41.670: INFO: Waiting for pod pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea to disappear Apr 2 22:13:41.690: INFO: Pod pod-projected-secrets-8e4eb3fd-d8e0-4a17-9492-a97eda99fcea no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:13:41.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-644" for this suite. 
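The "with mappings" variant above differs from the plain secret-volume test in a single field: an items list that remaps a secret key to a chosen path inside the mount. The shape, as a sketch; secret, key, and pod names are illustrative:

  kubectl create secret generic my-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/projected/new-path"]
      volumeMounts:
      - name: vol
        mountPath: /etc/projected
    volumes:
    - name: vol
      projected:
        sources:
        - secret:
            name: my-secret
            items:
            - key: data-1        # key in the Secret...
              path: new-path     # ...surfaced under this file name
  EOF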
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3895,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:13:41.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-629c4a7d-12db-441b-83f1-661a758796ba STEP: Creating a pod to test consume secrets Apr 2 22:13:41.796: INFO: Waiting up to 5m0s for pod "pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3" in namespace "secrets-5244" to be "success or failure" Apr 2 22:13:41.800: INFO: Pod "pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.963844ms Apr 2 22:13:43.812: INFO: Pod "pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0166669s Apr 2 22:13:45.816: INFO: Pod "pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020810753s STEP: Saw pod success Apr 2 22:13:45.817: INFO: Pod "pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3" satisfied condition "success or failure" Apr 2 22:13:45.820: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3 container secret-volume-test: STEP: delete the pod Apr 2 22:13:45.869: INFO: Waiting for pod pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3 to disappear Apr 2 22:13:45.884: INFO: Pod pod-secrets-7fc92c21-db8d-4b7c-b116-edc0f422bbc3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:13:45.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5244" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3905,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:13:45.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c084515d-d881-4b7a-9c96-f19fb86b9735 STEP: Creating a pod to test consume secrets Apr 2 22:13:45.945: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688" in namespace "projected-7271" to be "success or failure" Apr 2 22:13:45.962: INFO: Pod "pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688": Phase="Pending", Reason="", readiness=false. Elapsed: 17.178258ms Apr 2 22:13:47.965: INFO: Pod "pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019833568s Apr 2 22:13:49.969: INFO: Pod "pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024065707s STEP: Saw pod success Apr 2 22:13:49.969: INFO: Pod "pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688" satisfied condition "success or failure" Apr 2 22:13:49.973: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688 container projected-secret-volume-test: STEP: delete the pod Apr 2 22:13:49.999: INFO: Waiting for pod pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688 to disappear Apr 2 22:13:50.011: INFO: Pod pod-projected-secrets-e825aa08-323c-4587-b78b-008165961688 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:13:50.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7271" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3905,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:13:50.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:13:50.059: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:13:51.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3570" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":236,"skipped":3916,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:13:51.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating an pod Apr 2 22:13:51.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-301 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 2 22:13:54.024: INFO: stderr: "" Apr 2 22:13:54.024: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. 
Apr 2 22:13:54.024: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 2 22:13:54.024: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-301" to be "running and ready, or succeeded" Apr 2 22:13:54.052: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 28.677134ms Apr 2 22:13:56.056: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032602317s Apr 2 22:13:58.061: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.036974719s Apr 2 22:13:58.061: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 2 22:13:58.061: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 2 22:13:58.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-301' Apr 2 22:13:58.189: INFO: stderr: "" Apr 2 22:13:58.189: INFO: stdout: "I0402 22:13:56.347947 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/bxjd 305\nI0402 22:13:56.548158 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/4mx4 284\nI0402 22:13:56.748226 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/fwq 522\nI0402 22:13:56.948118 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/nql8 574\nI0402 22:13:57.148113 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/k4r 403\nI0402 22:13:57.348110 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/c8x 440\nI0402 22:13:57.548162 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/hjd 472\nI0402 22:13:57.748099 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/92n 209\nI0402 22:13:57.948119 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/pdn 263\nI0402 22:13:58.148175 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/hkwt 205\n" STEP: limiting log lines Apr 2 22:13:58.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-301 --tail=1' Apr 2 22:13:58.314: INFO: stderr: "" Apr 2 22:13:58.314: INFO: stdout: "I0402 22:13:58.148175 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/hkwt 205\n" Apr 2 22:13:58.314: INFO: got output "I0402 22:13:58.148175 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/hkwt 205\n" STEP: limiting log bytes Apr 2 22:13:58.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-301 --limit-bytes=1' Apr 2 22:13:58.422: INFO: stderr: "" Apr 2 22:13:58.422: INFO: stdout: "I" Apr 2 22:13:58.422: INFO: got output "I" STEP: exposing timestamps Apr 2 22:13:58.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-301 --tail=1 --timestamps' Apr 2 22:13:58.535: INFO: stderr: "" Apr 2 22:13:58.535: INFO: stdout: "2020-04-02T22:13:58.348268381Z I0402 22:13:58.348132 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/xlb 370\n" Apr 2 22:13:58.535: INFO: got output "2020-04-02T22:13:58.348268381Z I0402 22:13:58.348132 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/xlb 370\n" STEP: restricting to a time range Apr 2 22:14:01.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-301 --since=1s' Apr 2 
22:14:01.149: INFO: stderr: "" Apr 2 22:14:01.149: INFO: stdout: "I0402 22:14:00.148141 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/vbm5 453\nI0402 22:14:00.348156 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/gk27 554\nI0402 22:14:00.548153 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/vhj6 347\nI0402 22:14:00.748111 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/lfp 496\nI0402 22:14:00.948104 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/nq7s 567\n" Apr 2 22:14:01.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-301 --since=24h' Apr 2 22:14:01.269: INFO: stderr: "" Apr 2 22:14:01.269: INFO: stdout: "I0402 22:13:56.347947 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/bxjd 305\nI0402 22:13:56.548158 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/4mx4 284\nI0402 22:13:56.748226 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/fwq 522\nI0402 22:13:56.948118 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/nql8 574\nI0402 22:13:57.148113 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/k4r 403\nI0402 22:13:57.348110 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/c8x 440\nI0402 22:13:57.548162 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/hjd 472\nI0402 22:13:57.748099 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/92n 209\nI0402 22:13:57.948119 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/pdn 263\nI0402 22:13:58.148175 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/hkwt 205\nI0402 22:13:58.348132 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/xlb 370\nI0402 22:13:58.548116 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/wkhj 260\nI0402 22:13:58.748122 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/tlr 338\nI0402 22:13:58.948145 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/z8ll 575\nI0402 22:13:59.148116 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/hdv 260\nI0402 22:13:59.348133 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/jfcw 264\nI0402 22:13:59.548146 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/grw 488\nI0402 22:13:59.748092 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/2dhz 523\nI0402 22:13:59.948172 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/98s 582\nI0402 22:14:00.148141 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/vbm5 453\nI0402 22:14:00.348156 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/gk27 554\nI0402 22:14:00.548153 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/vhj6 347\nI0402 22:14:00.748111 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/lfp 496\nI0402 22:14:00.948104 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/nq7s 567\nI0402 22:14:01.148163 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/tb2t 519\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Apr 2 22:14:01.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-301' Apr 2 22:14:09.478: INFO: stderr: "" Apr 2 22:14:09.478: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:09.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-301" for this suite. • [SLOW TEST:18.383 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":237,"skipped":3918,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:09.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-1e6fbc82-5ee5-4486-b22f-0dd8628f2305 STEP: Creating a pod to test consume configMaps Apr 2 22:14:09.586: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7" in namespace "projected-7871" to be "success or failure" Apr 2 22:14:09.591: INFO: Pod "pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344932ms Apr 2 22:14:11.646: INFO: Pod "pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05930384s Apr 2 22:14:13.650: INFO: Pod "pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063338532s Apr 2 22:14:15.711: INFO: Pod "pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124429552s Apr 2 22:14:17.714: INFO: Pod "pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.127388269s STEP: Saw pod success Apr 2 22:14:17.714: INFO: Pod "pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7" satisfied condition "success or failure" Apr 2 22:14:17.716: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7 container projected-configmap-volume-test: STEP: delete the pod Apr 2 22:14:17.742: INFO: Waiting for pod pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7 to disappear Apr 2 22:14:17.753: INFO: Pod pod-projected-configmaps-18588a23-dc3e-4eb1-bc59-e58cbf1db8e7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:17.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7871" for this suite. • [SLOW TEST:8.259 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3937,"failed":0} SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:17.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-6441f665-7824-4584-a9df-eb72c1824330 STEP: Creating configMap with name cm-test-opt-upd-d963e53d-558c-41f6-a3a9-0e1aa002f7b4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6441f665-7824-4584-a9df-eb72c1824330 STEP: Updating configmap cm-test-opt-upd-d963e53d-558c-41f6-a3a9-0e1aa002f7b4 STEP: Creating configMap with name cm-test-opt-create-233847f0-5eca-414e-9616-b410c02006e3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:28.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3133" for this suite. 
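The optional-update sequence above covers three transitions at once (a deleted configMap, an updated one, and a newly created one), all reflected into an already-running pod's volume on the kubelet's next sync. A by-hand sketch of the delete-and-recreate half; configMap, pod, and mount path names are illustrative, and the pod is assumed to mount cm-opt as a configMap volume marked optional: true:

  kubectl create configmap cm-opt --from-literal=data-1=value-1
  # ...start a pod mounting cm-opt as an optional configMap volume at /etc/cm-volume...
  kubectl delete configmap cm-opt     # the projected file disappears after the next resync
  kubectl create configmap cm-opt --from-literal=data-1=value-2
  kubectl exec mypod -- cat /etc/cm-volume/data-1   # eventually prints value-2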
• [SLOW TEST:10.458 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3939,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:28.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-4143/configmap-test-d8f214b3-894a-42ab-986c-10b7ee9a91f0 STEP: Creating a pod to test consume configMaps Apr 2 22:14:28.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9" in namespace "configmap-4143" to be "success or failure" Apr 2 22:14:29.280: INFO: Pod "pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9": Phase="Pending", Reason="", readiness=false. Elapsed: 283.768117ms Apr 2 22:14:31.283: INFO: Pod "pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286953721s Apr 2 22:14:33.287: INFO: Pod "pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.290518551s STEP: Saw pod success Apr 2 22:14:33.287: INFO: Pod "pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9" satisfied condition "success or failure" Apr 2 22:14:33.292: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9 container env-test: STEP: delete the pod Apr 2 22:14:33.408: INFO: Waiting for pod pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9 to disappear Apr 2 22:14:33.463: INFO: Pod pod-configmaps-27a6e089-49cc-47fe-8508-be6df32aadb9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:33.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4143" for this suite. 
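Consuming a configMap "via the environment", as above, means wiring keys through env[].valueFrom.configMapKeyRef (or envFrom) instead of a volume. A minimal equivalent; configMap, key, and pod names are illustrative:

  kubectl create configmap env-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-demo-pod
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo DATA_1=$DATA_1"]
      env:
      - name: DATA_1
        valueFrom:
          configMapKeyRef:
            name: env-demo
            key: data-1
  EOF
  kubectl logs env-demo-pod     # prints DATA_1=value-1 once the pod completes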
• [SLOW TEST:5.333 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3953,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:33.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 2 22:14:33.872: INFO: Waiting up to 5m0s for pod "pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f" in namespace "emptydir-6833" to be "success or failure" Apr 2 22:14:33.895: INFO: Pod "pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.209852ms Apr 2 22:14:36.249: INFO: Pod "pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376935757s Apr 2 22:14:38.254: INFO: Pod "pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.381344524s STEP: Saw pod success Apr 2 22:14:38.254: INFO: Pod "pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f" satisfied condition "success or failure" Apr 2 22:14:38.257: INFO: Trying to get logs from node jerma-worker pod pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f container test-container: STEP: delete the pod Apr 2 22:14:38.355: INFO: Waiting for pod pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f to disappear Apr 2 22:14:38.400: INFO: Pod pod-b48dd66b-7b09-4bb0-b1b7-cb975fb50c7f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:38.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6833" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3964,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:38.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:45.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8905" for this suite. • [SLOW TEST:7.124 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":242,"skipped":3977,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:45.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 22:14:45.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4824' Apr 2 22:14:45.737: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 2 22:14:45.737: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Apr 2 22:14:47.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4824' Apr 2 22:14:47.974: INFO: stderr: "" Apr 2 22:14:47.974: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:47.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4824" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":243,"skipped":3993,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:47.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 22:14:48.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19" in namespace "projected-1648" to be "success or failure" Apr 2 22:14:48.195: INFO: Pod "downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19": Phase="Pending", Reason="", readiness=false. Elapsed: 59.119613ms Apr 2 22:14:50.199: INFO: Pod "downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062951204s Apr 2 22:14:52.203: INFO: Pod "downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066801706s STEP: Saw pod success Apr 2 22:14:52.203: INFO: Pod "downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19" satisfied condition "success or failure" Apr 2 22:14:52.206: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19 container client-container: STEP: delete the pod Apr 2 22:14:52.268: INFO: Waiting for pod downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19 to disappear Apr 2 22:14:52.271: INFO: Pod downwardapi-volume-9ecbce78-a717-454d-8a21-d0ad02f87d19 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:52.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1648" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:52.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 22:14:52.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8" in namespace "projected-6381" to be "success or failure" Apr 2 22:14:52.346: INFO: Pod "downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488399ms Apr 2 22:14:54.350: INFO: Pod "downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007334354s Apr 2 22:14:56.354: INFO: Pod "downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011460789s STEP: Saw pod success Apr 2 22:14:56.354: INFO: Pod "downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8" satisfied condition "success or failure" Apr 2 22:14:56.357: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8 container client-container: STEP: delete the pod Apr 2 22:14:56.379: INFO: Waiting for pod downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8 to disappear Apr 2 22:14:56.383: INFO: Pod downwardapi-volume-829ddf81-6cd9-40e6-a703-d90a4cc07fc8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:56.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6381" for this suite. 
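Both projected downwardAPI specs above (podname via metadata.name, and DefaultMode on the generated files) reduce to a volume shaped like this sketch; names are illustrative and 0400 stands in for whatever mode the suite checks:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400        # applied to every file that sets no explicit mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

kubectl logs downwardapi-volume-demo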
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4036,"failed":0} ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:56.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:14:56.508: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.359399ms) Apr 2 22:14:56.511: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.36476ms) Apr 2 22:14:56.514: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.060428ms) Apr 2 22:14:56.518: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.2527ms) Apr 2 22:14:56.521: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.504744ms) Apr 2 22:14:56.525: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.605395ms) Apr 2 22:14:56.528: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.616918ms) Apr 2 22:14:56.532: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.78542ms) Apr 2 22:14:56.536: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.461177ms) Apr 2 22:14:56.540: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.753812ms) Apr 2 22:14:56.543: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.630268ms) Apr 2 22:14:56.547: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.810119ms) Apr 2 22:14:56.551: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.690282ms) Apr 2 22:14:56.555: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.976252ms) Apr 2 22:14:56.558: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.342574ms) Apr 2 22:14:56.562: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.721421ms) Apr 2 22:14:56.566: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.632696ms) Apr 2 22:14:56.569: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.662017ms) Apr 2 22:14:56.573: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.615422ms) Apr 2 22:14:56.577: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 3.985089ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:14:56.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6706" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":246,"skipped":4036,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:14:56.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 2 22:14:57.322: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 2 22:14:59.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462497, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462497, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462497, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462497, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 22:15:02.364: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:15:02.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:15:03.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6473" for this suite. 
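The conversion-webhook spec just above registers two served versions of a CRD and points spec.conversion at the deployed webhook service, so a list that mixes v1 and v2 objects can be read back uniformly in either version. A rough shape of such a CRD; the group, schemas, handler path, and port here are assumptions for illustration, while the service name and namespace are the ones from the run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget, listKind: WidgetList}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  conversion:
    strategy: Webhook          # objects not stored in the requested version are converted
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          namespace: crd-webhook-6473
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert    # assumed handler path
          port: 9443           # assumed service port
        # caBundle: <base64 PEM bundle that signs the webhook's serving cert>
EOF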
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.144 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":247,"skipped":4041,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:15:03.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:15:03.867: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 2 22:15:03.887: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:03.898: INFO: Number of nodes with available pods: 0 Apr 2 22:15:03.898: INFO: Node jerma-worker is running more than one daemon pod Apr 2 22:15:04.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:04.906: INFO: Number of nodes with available pods: 0 Apr 2 22:15:04.906: INFO: Node jerma-worker is running more than one daemon pod Apr 2 22:15:05.903: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:05.907: INFO: Number of nodes with available pods: 0 Apr 2 22:15:05.907: INFO: Node jerma-worker is running more than one daemon pod Apr 2 22:15:06.907: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:06.941: INFO: Number of nodes with available pods: 0 Apr 2 22:15:06.941: INFO: Node jerma-worker is running more than one daemon pod Apr 2 22:15:07.903: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:07.907: INFO: Number of nodes with available pods: 2 Apr 2 22:15:07.907: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 2 22:15:07.940: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:07.940: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:07.994: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:08.999: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:08.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:09.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:10.000: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:10.000: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:10.004: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:10.999: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 22:15:10.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:11.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:12.017: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:12.017: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:12.017: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:12.021: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:12.999: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:12.999: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:12.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:13.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:13.998: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:13.998: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:13.998: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:14.001: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:14.999: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:14.999: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:14.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:15.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:15.998: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:15.998: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:15.998: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:16.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:16.999: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 22:15:16.999: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:16.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:17.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:17.999: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:17.999: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:17.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:18.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:18.998: INFO: Wrong image for pod: daemon-set-mvzwp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:18.998: INFO: Pod daemon-set-mvzwp is not available Apr 2 22:15:18.998: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:19.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:19.998: INFO: Pod daemon-set-df8qb is not available Apr 2 22:15:19.998: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:20.001: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:21.090: INFO: Pod daemon-set-df8qb is not available Apr 2 22:15:21.090: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:21.094: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:21.998: INFO: Pod daemon-set-df8qb is not available Apr 2 22:15:21.998: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:22.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:23.029: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:23.034: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:24.003: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 22:15:24.003: INFO: Pod daemon-set-xwgwv is not available Apr 2 22:15:24.007: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:24.998: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:24.998: INFO: Pod daemon-set-xwgwv is not available Apr 2 22:15:25.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:25.998: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:25.998: INFO: Pod daemon-set-xwgwv is not available Apr 2 22:15:26.048: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:26.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:26.999: INFO: Pod daemon-set-xwgwv is not available Apr 2 22:15:27.036: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:27.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:27.999: INFO: Pod daemon-set-xwgwv is not available Apr 2 22:15:28.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:28.999: INFO: Wrong image for pod: daemon-set-xwgwv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 22:15:28.999: INFO: Pod daemon-set-xwgwv is not available Apr 2 22:15:29.004: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:29.999: INFO: Pod daemon-set-tt675 is not available Apr 2 22:15:30.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 2 22:15:30.007: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:30.010: INFO: Number of nodes with available pods: 1 Apr 2 22:15:30.010: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 22:15:31.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:31.017: INFO: Number of nodes with available pods: 1 Apr 2 22:15:31.017: INFO: Node jerma-worker2 is running more than one daemon pod Apr 2 22:15:32.014: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 22:15:32.017: INFO: Number of nodes with available pods: 2 Apr 2 22:15:32.017: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1103, will wait for the garbage collector to delete the pods Apr 2 22:15:32.086: INFO: Deleting DaemonSet.extensions daemon-set took: 5.586191ms Apr 2 22:15:32.387: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.217072ms Apr 2 22:15:39.290: INFO: Number of nodes with available pods: 0 Apr 2 22:15:39.290: INFO: Number of running nodes: 0, number of available pods: 0 Apr 2 22:15:39.293: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1103/daemonsets","resourceVersion":"4867059"},"items":null} Apr 2 22:15:39.296: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1103/pods","resourceVersion":"4867059"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:15:39.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1103" for this suite. 
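The rolling-update sequence above is what a DaemonSet with updateStrategy RollingUpdate (the apps/v1 default) produces: changing the pod template, here the image, makes the controller delete and replace pods node by node, which is why the log alternates between "Wrong image for pod" and "Pod ... is not available" until both worker nodes converge. A condensed equivalent:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  updateStrategy:
    type: RollingUpdate        # replace pods in place when the template changes
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF

# The image bump below is the same transition the test performs:
kubectl set image ds/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl rollout status ds/daemon-set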
• [SLOW TEST:35.559 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":248,"skipped":4042,"failed":0} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:15:39.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:15:44.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2781" for this suite. • [SLOW TEST:5.170 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":249,"skipped":4042,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:15:44.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8461 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-8461 STEP: Deleting pre-stop pod Apr 2 22:15:57.651: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:15:57.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8461" for this suite. • [SLOW TEST:13.209 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":250,"skipped":4105,"failed":0} S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:15:57.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-a3f72168-7690-4d9f-a736-3ba76e9225c2 STEP: Creating secret with name s-test-opt-upd-7fa8414d-45d8-4c48-aeb0-4c9103780a68 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a3f72168-7690-4d9f-a736-3ba76e9225c2 STEP: Updating secret s-test-opt-upd-7fa8414d-45d8-4c48-aeb0-4c9103780a68 STEP: Creating secret with name s-test-opt-create-02c91b40-074c-4a67-89a2-a09b168f2a21 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:17:24.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2006" for this suite. 
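The Secret spec above is the same optional-volume reconciliation as the ConfigMap case earlier, with secret sources; note the much longer wall time, since the kubelet re-syncs secret volumes on its own cadence. In sketch form, with placeholder names:

kubectl create secret generic s-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  containers:
  - name: app
    image: docker.io/library/httpd:2.4.38-alpine
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: s-demo
      optional: true           # tolerate a deleted or not-yet-created Secret
EOF

kubectl delete secret s-demo
kubectl create secret generic s-demo --from-literal=data-1=value-2
kubectl exec secret-volume-demo -- cat /etc/secret/data-1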
• [SLOW TEST:86.790 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4106,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:17:24.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-60905d1f-547e-461a-a5b4-aa057022b619 STEP: Creating a pod to test consume secrets Apr 2 22:17:24.666: INFO: Waiting up to 5m0s for pod "pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab" in namespace "secrets-9491" to be "success or failure" Apr 2 22:17:24.670: INFO: Pod "pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.484668ms Apr 2 22:17:26.674: INFO: Pod "pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007753735s Apr 2 22:17:28.678: INFO: Pod "pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011706857s STEP: Saw pod success Apr 2 22:17:28.678: INFO: Pod "pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab" satisfied condition "success or failure" Apr 2 22:17:28.681: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab container secret-volume-test: STEP: delete the pod Apr 2 22:17:28.728: INFO: Waiting for pod pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab to disappear Apr 2 22:17:28.742: INFO: Pod pod-secrets-02933b9a-f727-4114-9820-c7b8304345ab no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:17:28.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9491" for this suite. STEP: Destroying namespace "secret-namespace-2803" for this suite. 
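The two destroyed namespaces above give the next test away: it creates a same-named Secret in a second namespace and checks that the pod's volume resolves the name only within its own namespace. Reduced to commands, with illustrative namespace and secret names:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl create secret generic secret-test --from-literal=data-1=from-a -n demo-a
kubectl create secret generic secret-test --from-literal=data-1=from-b -n demo-b

cat <<'EOF' | kubectl apply -n demo-a -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # resolved in demo-a, never in demo-b
EOF

kubectl logs pod-secrets-demo -n demo-a   # expected: from-a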
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4122,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:17:28.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:17:28.833: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 2 22:17:31.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3620 create -f -' Apr 2 22:17:34.895: INFO: stderr: "" Apr 2 22:17:34.895: INFO: stdout: "e2e-test-crd-publish-openapi-577-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 2 22:17:34.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3620 delete e2e-test-crd-publish-openapi-577-crds test-cr' Apr 2 22:17:35.007: INFO: stderr: "" Apr 2 22:17:35.007: INFO: stdout: "e2e-test-crd-publish-openapi-577-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 2 22:17:35.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3620 apply -f -' Apr 2 22:17:35.261: INFO: stderr: "" Apr 2 22:17:35.262: INFO: stdout: "e2e-test-crd-publish-openapi-577-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 2 22:17:35.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3620 delete e2e-test-crd-publish-openapi-577-crds test-cr' Apr 2 22:17:35.375: INFO: stderr: "" Apr 2 22:17:35.375: INFO: stdout: "e2e-test-crd-publish-openapi-577-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 2 22:17:35.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-577-crds' Apr 2 22:17:35.602: INFO: stderr: "" Apr 2 22:17:35.602: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-577-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:17:37.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3620" for this suite. 
• [SLOW TEST:8.838 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":253,"skipped":4126,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:17:37.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 2 22:17:37.692: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:17:37.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1075" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":254,"skipped":4144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:17:37.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 2 22:17:37.854: INFO: Waiting up to 5m0s for pod "downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85" in namespace "downward-api-1724" to be "success or failure" Apr 2 22:17:37.893: INFO: Pod "downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85": Phase="Pending", Reason="", readiness=false. Elapsed: 39.033075ms Apr 2 22:17:39.899: INFO: Pod "downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.044112162s Apr 2 22:17:41.902: INFO: Pod "downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04771635s STEP: Saw pod success Apr 2 22:17:41.902: INFO: Pod "downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85" satisfied condition "success or failure" Apr 2 22:17:41.905: INFO: Trying to get logs from node jerma-worker2 pod downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85 container dapi-container: STEP: delete the pod Apr 2 22:17:41.923: INFO: Waiting for pod downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85 to disappear Apr 2 22:17:41.959: INFO: Pod downward-api-0e553c7e-908f-4a6b-97e8-ff3987082a85 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:17:41.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1724" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4186,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:17:41.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-7742 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7742 to expose endpoints map[] Apr 2 22:17:42.092: INFO: Get endpoints failed (13.501935ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 2 22:17:43.096: INFO: successfully validated that service endpoint-test2 in namespace services-7742 exposes endpoints map[] (1.016953676s elapsed) STEP: Creating pod pod1 in namespace services-7742 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7742 to expose endpoints map[pod1:[80]] Apr 2 22:17:47.156: INFO: successfully validated that service endpoint-test2 in namespace services-7742 exposes endpoints map[pod1:[80]] (4.052752031s elapsed) STEP: Creating pod pod2 in namespace services-7742 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7742 to expose endpoints map[pod1:[80] pod2:[80]] Apr 2 22:17:51.356: INFO: successfully validated that service endpoint-test2 in namespace services-7742 exposes endpoints map[pod1:[80] pod2:[80]] (4.175059693s elapsed) STEP: Deleting pod pod1 in namespace services-7742 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7742 to expose endpoints map[pod2:[80]] Apr 2 22:17:52.403: INFO: successfully validated that service endpoint-test2 in namespace services-7742 exposes endpoints map[pod2:[80]] (1.042351682s elapsed) STEP: Deleting pod pod2 in namespace services-7742 STEP: waiting up to 3m0s for 
service endpoint-test2 in namespace services-7742 to expose endpoints map[] Apr 2 22:17:53.430: INFO: successfully validated that service endpoint-test2 in namespace services-7742 exposes endpoints map[] (1.02133696s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:17:53.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7742" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.546 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":256,"skipped":4203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:17:53.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 2 22:17:53.601: INFO: Waiting up to 5m0s for pod "pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd" in namespace "emptydir-982" to be "success or failure" Apr 2 22:17:53.611: INFO: Pod "pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168961ms Apr 2 22:17:55.616: INFO: Pod "pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014505808s Apr 2 22:17:57.620: INFO: Pod "pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018641282s STEP: Saw pod success Apr 2 22:17:57.620: INFO: Pod "pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd" satisfied condition "success or failure" Apr 2 22:17:57.623: INFO: Trying to get logs from node jerma-worker2 pod pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd container test-container: STEP: delete the pod Apr 2 22:17:57.643: INFO: Waiting for pod pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd to disappear Apr 2 22:17:57.695: INFO: Pod pod-bf0735c3-18c7-4aa3-9a93-26e92622bdfd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:17:57.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-982" for this suite. 
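Both EmptyDir specs in this stretch, (non-root,0666,default) earlier and (non-root,0777,tmpfs) here, follow one template: mount an emptyDir (tmpfs when medium is Memory), run as a non-root UID, and verify the mount's mode and writability; emptyDir mounts default to 0777, which is what lets the non-root user write. Sketched with an illustrative image and UID:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part of the test matrix
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs; omit for the node's default storage
EOF

kubectl logs emptydir-demo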
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4235,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:17:57.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 2 22:17:57.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-856' Apr 2 22:17:58.085: INFO: stderr: "" Apr 2 22:17:58.085: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 2 22:17:59.089: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 22:17:59.089: INFO: Found 0 / 1 Apr 2 22:18:00.157: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 22:18:00.157: INFO: Found 0 / 1 Apr 2 22:18:01.090: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 22:18:01.090: INFO: Found 0 / 1 Apr 2 22:18:02.090: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 22:18:02.090: INFO: Found 1 / 1 Apr 2 22:18:02.090: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 2 22:18:02.093: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 22:18:02.093: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 2 22:18:02.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-6nx56 --namespace=kubectl-856 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 2 22:18:02.218: INFO: stderr: "" Apr 2 22:18:02.218: INFO: stdout: "pod/agnhost-master-6nx56 patched\n" STEP: checking annotations Apr 2 22:18:02.230: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 22:18:02.230: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:18:02.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-856" for this suite. 
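The patch that kubectl sends above is an ordinary strategic-merge patch adding the annotation x=y. The same operation through client-go looks roughly like the sketch below; the kubeconfig path, pod name, and namespace are taken from the log, and the Patch signature shown matches the client-go release contemporary with this v1.17 cluster (newer releases also take a context.Context and metav1.PatchOptions):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Strategic-merge patch identical to the one kubectl sent above.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := cs.CoreV1().Pods("kubectl-856").Patch(
		"agnhost-master-6nx56", types.StrategicMergePatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println("annotation x =", pod.Annotations["x"])
}
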
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":258,"skipped":4237,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:18:02.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 22:18:02.848: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 22:18:04.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462682, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462682, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462682, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462682, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 22:18:07.890: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:18:07.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3370" for this suite. STEP: Destroying namespace "webhook-3370-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.946 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":259,"skipped":4237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:18:08.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:18:08.227: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-746 I0402 22:18:08.252002 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-746, replica count: 1 I0402 22:18:09.302365 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 22:18:10.302553 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 22:18:11.302837 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 22:18:12.303106 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 22:18:12.430: INFO: Created: latency-svc-l95fr Apr 2 22:18:12.448: INFO: Got endpoints: latency-svc-l95fr [45.150188ms] Apr 2 22:18:12.523: INFO: Created: latency-svc-7rlnk Apr 2 22:18:12.526: INFO: Got endpoints: latency-svc-7rlnk [77.884017ms] Apr 2 22:18:12.580: INFO: Created: latency-svc-cspxh Apr 2 22:18:12.599: INFO: Got endpoints: latency-svc-cspxh [149.16021ms] Apr 2 22:18:12.623: INFO: Created: latency-svc-qbpn4 Apr 2 22:18:12.666: INFO: Got endpoints: latency-svc-qbpn4 [217.93818ms] Apr 2 22:18:12.695: INFO: Created: latency-svc-d4zj8 Apr 2 22:18:12.726: INFO: Got endpoints: latency-svc-d4zj8 [277.998629ms] Apr 2 22:18:12.761: INFO: Created: latency-svc-gvpdp Apr 2 22:18:12.828: INFO: Got endpoints: latency-svc-gvpdp [379.715299ms] Apr 2 22:18:12.830: INFO: Created: latency-svc-lh67n Apr 2 22:18:12.886: INFO: Got endpoints: latency-svc-lh67n [436.936309ms] Apr 2 22:18:12.916: INFO: Created: latency-svc-8mpf5 Apr 2 22:18:12.926: INFO: Got endpoints: latency-svc-8mpf5 [477.715402ms] Apr 2 22:18:12.989: INFO: Created: latency-svc-n46tn Apr 2 22:18:13.001: INFO: Got endpoints: 
latency-svc-n46tn [551.486755ms] Apr 2 22:18:13.031: INFO: Created: latency-svc-9f4kk Apr 2 22:18:13.042: INFO: Got endpoints: latency-svc-9f4kk [592.389661ms] Apr 2 22:18:13.097: INFO: Created: latency-svc-4vd55 Apr 2 22:18:13.121: INFO: Got endpoints: latency-svc-4vd55 [672.553743ms] Apr 2 22:18:13.169: INFO: Created: latency-svc-58nkk Apr 2 22:18:13.191: INFO: Got endpoints: latency-svc-58nkk [742.352152ms] Apr 2 22:18:13.235: INFO: Created: latency-svc-8fcr6 Apr 2 22:18:13.250: INFO: Got endpoints: latency-svc-8fcr6 [800.632916ms] Apr 2 22:18:13.300: INFO: Created: latency-svc-4hhwt Apr 2 22:18:13.316: INFO: Got endpoints: latency-svc-4hhwt [867.770971ms] Apr 2 22:18:13.373: INFO: Created: latency-svc-mtz4b Apr 2 22:18:13.383: INFO: Got endpoints: latency-svc-mtz4b [934.965135ms] Apr 2 22:18:13.420: INFO: Created: latency-svc-qqzln Apr 2 22:18:13.434: INFO: Got endpoints: latency-svc-qqzln [983.997026ms] Apr 2 22:18:13.456: INFO: Created: latency-svc-nsb8l Apr 2 22:18:13.522: INFO: Got endpoints: latency-svc-nsb8l [996.218539ms] Apr 2 22:18:13.558: INFO: Created: latency-svc-ksrgc Apr 2 22:18:13.569: INFO: Got endpoints: latency-svc-ksrgc [970.73259ms] Apr 2 22:18:13.673: INFO: Created: latency-svc-pbvtb Apr 2 22:18:13.690: INFO: Got endpoints: latency-svc-pbvtb [1.023513059s] Apr 2 22:18:13.727: INFO: Created: latency-svc-9fzkw Apr 2 22:18:13.750: INFO: Got endpoints: latency-svc-9fzkw [1.023552569s] Apr 2 22:18:13.852: INFO: Created: latency-svc-w42c6 Apr 2 22:18:13.858: INFO: Got endpoints: latency-svc-w42c6 [1.029600206s] Apr 2 22:18:13.907: INFO: Created: latency-svc-hrbg4 Apr 2 22:18:13.931: INFO: Got endpoints: latency-svc-hrbg4 [1.044657964s] Apr 2 22:18:14.015: INFO: Created: latency-svc-zp9lf Apr 2 22:18:14.038: INFO: Got endpoints: latency-svc-zp9lf [1.112264956s] Apr 2 22:18:14.123: INFO: Created: latency-svc-9wjvz Apr 2 22:18:14.141: INFO: Got endpoints: latency-svc-9wjvz [1.140026011s] Apr 2 22:18:14.207: INFO: Created: latency-svc-6984g Apr 2 22:18:14.295: INFO: Got endpoints: latency-svc-6984g [1.252948801s] Apr 2 22:18:14.334: INFO: Created: latency-svc-zmz7f Apr 2 22:18:14.358: INFO: Got endpoints: latency-svc-zmz7f [1.236530231s] Apr 2 22:18:14.387: INFO: Created: latency-svc-fmgwx Apr 2 22:18:14.441: INFO: Got endpoints: latency-svc-fmgwx [1.250490916s] Apr 2 22:18:14.471: INFO: Created: latency-svc-v4pkq Apr 2 22:18:14.495: INFO: Got endpoints: latency-svc-v4pkq [1.244486989s] Apr 2 22:18:14.526: INFO: Created: latency-svc-pnh6k Apr 2 22:18:14.576: INFO: Got endpoints: latency-svc-pnh6k [1.260317725s] Apr 2 22:18:14.591: INFO: Created: latency-svc-j2jv4 Apr 2 22:18:14.604: INFO: Got endpoints: latency-svc-j2jv4 [1.220926172s] Apr 2 22:18:14.626: INFO: Created: latency-svc-xm72b Apr 2 22:18:14.640: INFO: Got endpoints: latency-svc-xm72b [1.206322152s] Apr 2 22:18:14.663: INFO: Created: latency-svc-p5bkg Apr 2 22:18:14.708: INFO: Got endpoints: latency-svc-p5bkg [1.185717084s] Apr 2 22:18:14.741: INFO: Created: latency-svc-8xh96 Apr 2 22:18:14.790: INFO: Got endpoints: latency-svc-8xh96 [1.220096094s] Apr 2 22:18:14.864: INFO: Created: latency-svc-n58x5 Apr 2 22:18:14.871: INFO: Got endpoints: latency-svc-n58x5 [1.181294379s] Apr 2 22:18:14.896: INFO: Created: latency-svc-cvjvd Apr 2 22:18:14.906: INFO: Got endpoints: latency-svc-cvjvd [1.155660569s] Apr 2 22:18:14.927: INFO: Created: latency-svc-v5msf Apr 2 22:18:14.957: INFO: Got endpoints: latency-svc-v5msf [1.09967875s] Apr 2 22:18:15.014: INFO: Created: latency-svc-jp48j Apr 2 22:18:15.034: INFO: Got endpoints: 
latency-svc-jp48j [1.103378944s] Apr 2 22:18:15.064: INFO: Created: latency-svc-f4464 Apr 2 22:18:15.083: INFO: Got endpoints: latency-svc-f4464 [1.044342618s] Apr 2 22:18:15.112: INFO: Created: latency-svc-4h9bg Apr 2 22:18:15.151: INFO: Got endpoints: latency-svc-4h9bg [1.010390986s] Apr 2 22:18:15.185: INFO: Created: latency-svc-lh4hh Apr 2 22:18:15.202: INFO: Got endpoints: latency-svc-lh4hh [906.229508ms] Apr 2 22:18:15.227: INFO: Created: latency-svc-mj2dg Apr 2 22:18:15.242: INFO: Got endpoints: latency-svc-mj2dg [884.795571ms] Apr 2 22:18:15.307: INFO: Created: latency-svc-r2v2x Apr 2 22:18:15.315: INFO: Got endpoints: latency-svc-r2v2x [873.419739ms] Apr 2 22:18:15.346: INFO: Created: latency-svc-ncj72 Apr 2 22:18:15.358: INFO: Got endpoints: latency-svc-ncj72 [862.674486ms] Apr 2 22:18:15.389: INFO: Created: latency-svc-4fpn4 Apr 2 22:18:15.439: INFO: Got endpoints: latency-svc-4fpn4 [862.889057ms] Apr 2 22:18:15.582: INFO: Created: latency-svc-h4vgk Apr 2 22:18:15.588: INFO: Got endpoints: latency-svc-h4vgk [983.265249ms] Apr 2 22:18:15.616: INFO: Created: latency-svc-wwps2 Apr 2 22:18:15.634: INFO: Got endpoints: latency-svc-wwps2 [993.780713ms] Apr 2 22:18:15.659: INFO: Created: latency-svc-n9nbn Apr 2 22:18:15.762: INFO: Got endpoints: latency-svc-n9nbn [1.054035585s] Apr 2 22:18:15.766: INFO: Created: latency-svc-mrlcf Apr 2 22:18:15.772: INFO: Got endpoints: latency-svc-mrlcf [982.390064ms] Apr 2 22:18:15.815: INFO: Created: latency-svc-5p5gf Apr 2 22:18:15.833: INFO: Got endpoints: latency-svc-5p5gf [961.489891ms] Apr 2 22:18:15.906: INFO: Created: latency-svc-z24p4 Apr 2 22:18:15.917: INFO: Got endpoints: latency-svc-z24p4 [1.010778728s] Apr 2 22:18:15.934: INFO: Created: latency-svc-r4zkq Apr 2 22:18:15.947: INFO: Got endpoints: latency-svc-r4zkq [989.372721ms] Apr 2 22:18:15.971: INFO: Created: latency-svc-hrnnv Apr 2 22:18:15.983: INFO: Got endpoints: latency-svc-hrnnv [948.62014ms] Apr 2 22:18:16.001: INFO: Created: latency-svc-r7nhl Apr 2 22:18:16.062: INFO: Got endpoints: latency-svc-r7nhl [978.913698ms] Apr 2 22:18:16.064: INFO: Created: latency-svc-trjwh Apr 2 22:18:16.074: INFO: Got endpoints: latency-svc-trjwh [922.990006ms] Apr 2 22:18:16.115: INFO: Created: latency-svc-w7gzf Apr 2 22:18:16.128: INFO: Got endpoints: latency-svc-w7gzf [925.918858ms] Apr 2 22:18:16.157: INFO: Created: latency-svc-8g4vq Apr 2 22:18:16.193: INFO: Got endpoints: latency-svc-8g4vq [950.822674ms] Apr 2 22:18:16.223: INFO: Created: latency-svc-9t84f Apr 2 22:18:16.236: INFO: Got endpoints: latency-svc-9t84f [921.700384ms] Apr 2 22:18:16.253: INFO: Created: latency-svc-x8xql Apr 2 22:18:16.267: INFO: Got endpoints: latency-svc-x8xql [908.910603ms] Apr 2 22:18:16.288: INFO: Created: latency-svc-5zlxh Apr 2 22:18:16.385: INFO: Got endpoints: latency-svc-5zlxh [945.224846ms] Apr 2 22:18:16.391: INFO: Created: latency-svc-gxfbn Apr 2 22:18:16.406: INFO: Got endpoints: latency-svc-gxfbn [817.88066ms] Apr 2 22:18:16.429: INFO: Created: latency-svc-hqwz2 Apr 2 22:18:16.442: INFO: Got endpoints: latency-svc-hqwz2 [807.714711ms] Apr 2 22:18:16.463: INFO: Created: latency-svc-lgl84 Apr 2 22:18:16.478: INFO: Got endpoints: latency-svc-lgl84 [716.198033ms] Apr 2 22:18:16.529: INFO: Created: latency-svc-fg2nn Apr 2 22:18:16.539: INFO: Got endpoints: latency-svc-fg2nn [766.710768ms] Apr 2 22:18:16.559: INFO: Created: latency-svc-kkth5 Apr 2 22:18:16.568: INFO: Got endpoints: latency-svc-kkth5 [735.811784ms] Apr 2 22:18:16.589: INFO: Created: latency-svc-tnbgh Apr 2 22:18:16.605: INFO: Got endpoints: 
latency-svc-tnbgh [688.118687ms] Apr 2 22:18:16.625: INFO: Created: latency-svc-khcsj Apr 2 22:18:16.696: INFO: Got endpoints: latency-svc-khcsj [748.866899ms] Apr 2 22:18:16.714: INFO: Created: latency-svc-wqtws Apr 2 22:18:16.739: INFO: Got endpoints: latency-svc-wqtws [755.602497ms] Apr 2 22:18:16.739: INFO: Created: latency-svc-knvzb Apr 2 22:18:16.756: INFO: Got endpoints: latency-svc-knvzb [694.095841ms] Apr 2 22:18:16.781: INFO: Created: latency-svc-h9jdl Apr 2 22:18:16.792: INFO: Got endpoints: latency-svc-h9jdl [717.370585ms] Apr 2 22:18:16.852: INFO: Created: latency-svc-49pc5 Apr 2 22:18:16.855: INFO: Got endpoints: latency-svc-49pc5 [727.622865ms] Apr 2 22:18:16.889: INFO: Created: latency-svc-dgpmj Apr 2 22:18:16.907: INFO: Got endpoints: latency-svc-dgpmj [713.220006ms] Apr 2 22:18:16.931: INFO: Created: latency-svc-r9fnl Apr 2 22:18:16.943: INFO: Got endpoints: latency-svc-r9fnl [706.792145ms] Apr 2 22:18:16.990: INFO: Created: latency-svc-gk8zs Apr 2 22:18:16.994: INFO: Got endpoints: latency-svc-gk8zs [727.329258ms] Apr 2 22:18:17.020: INFO: Created: latency-svc-2m8xw Apr 2 22:18:17.033: INFO: Got endpoints: latency-svc-2m8xw [648.150445ms] Apr 2 22:18:17.057: INFO: Created: latency-svc-vxbcq Apr 2 22:18:17.070: INFO: Got endpoints: latency-svc-vxbcq [663.895674ms] Apr 2 22:18:17.133: INFO: Created: latency-svc-5wqk6 Apr 2 22:18:17.136: INFO: Got endpoints: latency-svc-5wqk6 [694.380068ms] Apr 2 22:18:17.171: INFO: Created: latency-svc-s55qp Apr 2 22:18:17.184: INFO: Got endpoints: latency-svc-s55qp [705.941392ms] Apr 2 22:18:17.207: INFO: Created: latency-svc-whckb Apr 2 22:18:17.221: INFO: Got endpoints: latency-svc-whckb [682.156868ms] Apr 2 22:18:17.265: INFO: Created: latency-svc-z2whc Apr 2 22:18:17.269: INFO: Got endpoints: latency-svc-z2whc [700.930632ms] Apr 2 22:18:17.297: INFO: Created: latency-svc-xtlj8 Apr 2 22:18:17.311: INFO: Got endpoints: latency-svc-xtlj8 [706.064717ms] Apr 2 22:18:17.339: INFO: Created: latency-svc-t4xzd Apr 2 22:18:17.353: INFO: Got endpoints: latency-svc-t4xzd [657.473857ms] Apr 2 22:18:17.399: INFO: Created: latency-svc-fx5hj Apr 2 22:18:17.414: INFO: Got endpoints: latency-svc-fx5hj [674.807443ms] Apr 2 22:18:17.435: INFO: Created: latency-svc-g6qhc Apr 2 22:18:17.445: INFO: Got endpoints: latency-svc-g6qhc [689.157335ms] Apr 2 22:18:17.464: INFO: Created: latency-svc-h4tt8 Apr 2 22:18:17.474: INFO: Got endpoints: latency-svc-h4tt8 [682.221338ms] Apr 2 22:18:17.494: INFO: Created: latency-svc-5ctz9 Apr 2 22:18:17.534: INFO: Got endpoints: latency-svc-5ctz9 [678.957437ms] Apr 2 22:18:17.549: INFO: Created: latency-svc-bhx97 Apr 2 22:18:17.565: INFO: Got endpoints: latency-svc-bhx97 [658.028969ms] Apr 2 22:18:17.585: INFO: Created: latency-svc-g5s2h Apr 2 22:18:17.601: INFO: Got endpoints: latency-svc-g5s2h [657.642463ms] Apr 2 22:18:17.626: INFO: Created: latency-svc-4zr2h Apr 2 22:18:17.666: INFO: Got endpoints: latency-svc-4zr2h [672.143666ms] Apr 2 22:18:17.680: INFO: Created: latency-svc-ncfnq Apr 2 22:18:17.691: INFO: Got endpoints: latency-svc-ncfnq [658.392414ms] Apr 2 22:18:17.717: INFO: Created: latency-svc-gs6s9 Apr 2 22:18:17.734: INFO: Got endpoints: latency-svc-gs6s9 [664.677999ms] Apr 2 22:18:17.804: INFO: Created: latency-svc-cqnjk Apr 2 22:18:17.836: INFO: Created: latency-svc-z9nfx Apr 2 22:18:17.836: INFO: Got endpoints: latency-svc-cqnjk [699.991951ms] Apr 2 22:18:17.866: INFO: Got endpoints: latency-svc-z9nfx [681.630653ms] Apr 2 22:18:17.948: INFO: Created: latency-svc-pc7tn Apr 2 22:18:17.950: INFO: Got 
endpoints: latency-svc-pc7tn [729.194025ms] Apr 2 22:18:17.974: INFO: Created: latency-svc-t6jf7 Apr 2 22:18:17.998: INFO: Got endpoints: latency-svc-t6jf7 [728.660347ms] Apr 2 22:18:18.047: INFO: Created: latency-svc-wnxkj Apr 2 22:18:18.104: INFO: Got endpoints: latency-svc-wnxkj [793.345695ms] Apr 2 22:18:18.132: INFO: Created: latency-svc-9r47p Apr 2 22:18:18.155: INFO: Got endpoints: latency-svc-9r47p [801.535471ms] Apr 2 22:18:18.178: INFO: Created: latency-svc-79xx9 Apr 2 22:18:18.241: INFO: Got endpoints: latency-svc-79xx9 [827.481129ms] Apr 2 22:18:18.244: INFO: Created: latency-svc-ppn8t Apr 2 22:18:18.255: INFO: Got endpoints: latency-svc-ppn8t [809.707978ms] Apr 2 22:18:18.280: INFO: Created: latency-svc-ptgjz Apr 2 22:18:18.305: INFO: Got endpoints: latency-svc-ptgjz [831.137992ms] Apr 2 22:18:18.329: INFO: Created: latency-svc-jjzcf Apr 2 22:18:18.379: INFO: Got endpoints: latency-svc-jjzcf [844.221749ms] Apr 2 22:18:18.406: INFO: Created: latency-svc-v9h5t Apr 2 22:18:18.436: INFO: Got endpoints: latency-svc-v9h5t [871.263045ms] Apr 2 22:18:18.467: INFO: Created: latency-svc-8j9q9 Apr 2 22:18:18.553: INFO: Got endpoints: latency-svc-8j9q9 [951.504097ms] Apr 2 22:18:18.555: INFO: Created: latency-svc-r2t72 Apr 2 22:18:18.565: INFO: Got endpoints: latency-svc-r2t72 [898.78869ms] Apr 2 22:18:18.586: INFO: Created: latency-svc-zp2kn Apr 2 22:18:18.606: INFO: Got endpoints: latency-svc-zp2kn [914.765846ms] Apr 2 22:18:18.623: INFO: Created: latency-svc-5v56z Apr 2 22:18:18.631: INFO: Got endpoints: latency-svc-5v56z [896.997232ms] Apr 2 22:18:18.653: INFO: Created: latency-svc-k4q9r Apr 2 22:18:18.715: INFO: Created: latency-svc-h6jxf Apr 2 22:18:18.722: INFO: Got endpoints: latency-svc-k4q9r [885.265981ms] Apr 2 22:18:18.722: INFO: Got endpoints: latency-svc-h6jxf [855.805923ms] Apr 2 22:18:18.743: INFO: Created: latency-svc-m6x55 Apr 2 22:18:18.758: INFO: Got endpoints: latency-svc-m6x55 [808.141826ms] Apr 2 22:18:18.790: INFO: Created: latency-svc-xkkx6 Apr 2 22:18:18.806: INFO: Got endpoints: latency-svc-xkkx6 [808.211413ms] Apr 2 22:18:18.852: INFO: Created: latency-svc-bw4fr Apr 2 22:18:18.861: INFO: Got endpoints: latency-svc-bw4fr [756.695256ms] Apr 2 22:18:18.887: INFO: Created: latency-svc-8dczm Apr 2 22:18:18.904: INFO: Got endpoints: latency-svc-8dczm [748.599427ms] Apr 2 22:18:18.929: INFO: Created: latency-svc-fc62n Apr 2 22:18:18.946: INFO: Got endpoints: latency-svc-fc62n [704.71839ms] Apr 2 22:18:18.996: INFO: Created: latency-svc-srwnn Apr 2 22:18:19.006: INFO: Got endpoints: latency-svc-srwnn [750.944772ms] Apr 2 22:18:19.026: INFO: Created: latency-svc-lnp78 Apr 2 22:18:19.036: INFO: Got endpoints: latency-svc-lnp78 [730.846734ms] Apr 2 22:18:19.073: INFO: Created: latency-svc-vbp44 Apr 2 22:18:19.091: INFO: Got endpoints: latency-svc-vbp44 [711.793684ms] Apr 2 22:18:19.133: INFO: Created: latency-svc-5cj9v Apr 2 22:18:19.139: INFO: Got endpoints: latency-svc-5cj9v [702.744696ms] Apr 2 22:18:19.163: INFO: Created: latency-svc-jtpc4 Apr 2 22:18:19.175: INFO: Got endpoints: latency-svc-jtpc4 [622.511118ms] Apr 2 22:18:19.194: INFO: Created: latency-svc-rcnn7 Apr 2 22:18:19.205: INFO: Got endpoints: latency-svc-rcnn7 [640.194514ms] Apr 2 22:18:19.222: INFO: Created: latency-svc-tl7hl Apr 2 22:18:19.277: INFO: Got endpoints: latency-svc-tl7hl [670.485772ms] Apr 2 22:18:19.280: INFO: Created: latency-svc-fjshp Apr 2 22:18:19.283: INFO: Got endpoints: latency-svc-fjshp [652.063941ms] Apr 2 22:18:19.306: INFO: Created: latency-svc-q6bgk Apr 2 22:18:19.320: INFO: Got 
endpoints: latency-svc-q6bgk [598.479533ms] Apr 2 22:18:19.349: INFO: Created: latency-svc-zrhfr Apr 2 22:18:19.368: INFO: Got endpoints: latency-svc-zrhfr [646.280157ms] Apr 2 22:18:19.440: INFO: Created: latency-svc-7dzfv Apr 2 22:18:19.441: INFO: Got endpoints: latency-svc-7dzfv [682.886337ms] Apr 2 22:18:19.468: INFO: Created: latency-svc-h6twn Apr 2 22:18:19.483: INFO: Got endpoints: latency-svc-h6twn [677.017531ms] Apr 2 22:18:19.504: INFO: Created: latency-svc-mzxlm Apr 2 22:18:19.513: INFO: Got endpoints: latency-svc-mzxlm [652.434919ms] Apr 2 22:18:19.535: INFO: Created: latency-svc-m9khx Apr 2 22:18:19.601: INFO: Got endpoints: latency-svc-m9khx [696.97354ms] Apr 2 22:18:19.603: INFO: Created: latency-svc-q4g9j Apr 2 22:18:19.610: INFO: Got endpoints: latency-svc-q4g9j [664.141177ms] Apr 2 22:18:19.661: INFO: Created: latency-svc-r2rdz Apr 2 22:18:19.690: INFO: Got endpoints: latency-svc-r2rdz [684.538568ms] Apr 2 22:18:19.802: INFO: Created: latency-svc-s68lx Apr 2 22:18:19.814: INFO: Got endpoints: latency-svc-s68lx [778.143385ms] Apr 2 22:18:19.847: INFO: Created: latency-svc-8tm9t Apr 2 22:18:19.888: INFO: Got endpoints: latency-svc-8tm9t [797.818531ms] Apr 2 22:18:19.900: INFO: Created: latency-svc-8f6g6 Apr 2 22:18:19.917: INFO: Got endpoints: latency-svc-8f6g6 [778.38956ms] Apr 2 22:18:19.943: INFO: Created: latency-svc-4cwtn Apr 2 22:18:19.959: INFO: Got endpoints: latency-svc-4cwtn [784.089474ms] Apr 2 22:18:20.025: INFO: Created: latency-svc-5tdpl Apr 2 22:18:20.039: INFO: Got endpoints: latency-svc-5tdpl [833.311224ms] Apr 2 22:18:20.068: INFO: Created: latency-svc-hfxbm Apr 2 22:18:20.098: INFO: Got endpoints: latency-svc-hfxbm [821.532639ms] Apr 2 22:18:20.188: INFO: Created: latency-svc-vmvvq Apr 2 22:18:20.213: INFO: Got endpoints: latency-svc-vmvvq [929.471211ms] Apr 2 22:18:20.214: INFO: Created: latency-svc-dzh2n Apr 2 22:18:20.242: INFO: Got endpoints: latency-svc-dzh2n [921.721767ms] Apr 2 22:18:20.272: INFO: Created: latency-svc-npjv9 Apr 2 22:18:20.284: INFO: Got endpoints: latency-svc-npjv9 [915.831016ms] Apr 2 22:18:20.343: INFO: Created: latency-svc-kv7fs Apr 2 22:18:20.369: INFO: Got endpoints: latency-svc-kv7fs [927.617341ms] Apr 2 22:18:20.405: INFO: Created: latency-svc-87gms Apr 2 22:18:20.434: INFO: Got endpoints: latency-svc-87gms [950.56109ms] Apr 2 22:18:20.481: INFO: Created: latency-svc-b4dcp Apr 2 22:18:20.500: INFO: Got endpoints: latency-svc-b4dcp [986.473188ms] Apr 2 22:18:20.531: INFO: Created: latency-svc-8r6nq Apr 2 22:18:20.543: INFO: Got endpoints: latency-svc-8r6nq [942.46832ms] Apr 2 22:18:20.567: INFO: Created: latency-svc-4tnzr Apr 2 22:18:20.579: INFO: Got endpoints: latency-svc-4tnzr [969.118638ms] Apr 2 22:18:20.624: INFO: Created: latency-svc-rwwng Apr 2 22:18:20.634: INFO: Got endpoints: latency-svc-rwwng [943.407507ms] Apr 2 22:18:20.656: INFO: Created: latency-svc-pkltw Apr 2 22:18:20.671: INFO: Got endpoints: latency-svc-pkltw [856.697157ms] Apr 2 22:18:20.692: INFO: Created: latency-svc-6tw94 Apr 2 22:18:20.706: INFO: Got endpoints: latency-svc-6tw94 [817.948337ms] Apr 2 22:18:20.770: INFO: Created: latency-svc-mxvdh Apr 2 22:18:20.774: INFO: Got endpoints: latency-svc-mxvdh [856.827632ms] Apr 2 22:18:20.843: INFO: Created: latency-svc-4kn2h Apr 2 22:18:20.857: INFO: Got endpoints: latency-svc-4kn2h [897.678848ms] Apr 2 22:18:20.906: INFO: Created: latency-svc-zz4p9 Apr 2 22:18:20.911: INFO: Got endpoints: latency-svc-zz4p9 [872.179948ms] Apr 2 22:18:20.932: INFO: Created: latency-svc-m44rt Apr 2 22:18:20.941: INFO: Got 
endpoints: latency-svc-m44rt [843.068568ms] Apr 2 22:18:20.963: INFO: Created: latency-svc-96vt9 Apr 2 22:18:20.978: INFO: Got endpoints: latency-svc-96vt9 [764.723669ms] Apr 2 22:18:21.000: INFO: Created: latency-svc-6fhbp Apr 2 22:18:21.032: INFO: Got endpoints: latency-svc-6fhbp [789.607567ms] Apr 2 22:18:21.047: INFO: Created: latency-svc-d29m5 Apr 2 22:18:21.062: INFO: Got endpoints: latency-svc-d29m5 [778.156707ms] Apr 2 22:18:21.082: INFO: Created: latency-svc-2gdzd Apr 2 22:18:21.099: INFO: Got endpoints: latency-svc-2gdzd [729.74319ms] Apr 2 22:18:21.125: INFO: Created: latency-svc-vv9sg Apr 2 22:18:21.169: INFO: Got endpoints: latency-svc-vv9sg [734.831624ms] Apr 2 22:18:21.170: INFO: Created: latency-svc-r24fg Apr 2 22:18:21.183: INFO: Got endpoints: latency-svc-r24fg [682.918678ms] Apr 2 22:18:21.209: INFO: Created: latency-svc-rxhgr Apr 2 22:18:21.226: INFO: Got endpoints: latency-svc-rxhgr [682.396248ms] Apr 2 22:18:21.251: INFO: Created: latency-svc-tb86q Apr 2 22:18:21.268: INFO: Got endpoints: latency-svc-tb86q [688.663683ms] Apr 2 22:18:21.313: INFO: Created: latency-svc-6p7dn Apr 2 22:18:21.322: INFO: Got endpoints: latency-svc-6p7dn [688.275612ms] Apr 2 22:18:21.346: INFO: Created: latency-svc-dwcjm Apr 2 22:18:21.364: INFO: Got endpoints: latency-svc-dwcjm [693.375368ms] Apr 2 22:18:21.388: INFO: Created: latency-svc-d4l8d Apr 2 22:18:21.401: INFO: Got endpoints: latency-svc-d4l8d [694.13545ms] Apr 2 22:18:21.455: INFO: Created: latency-svc-b4ws9 Apr 2 22:18:21.479: INFO: Got endpoints: latency-svc-b4ws9 [704.831882ms] Apr 2 22:18:21.497: INFO: Created: latency-svc-f56sr Apr 2 22:18:21.509: INFO: Got endpoints: latency-svc-f56sr [652.378712ms] Apr 2 22:18:21.533: INFO: Created: latency-svc-q5qqj Apr 2 22:18:21.546: INFO: Got endpoints: latency-svc-q5qqj [634.981381ms] Apr 2 22:18:21.587: INFO: Created: latency-svc-smx46 Apr 2 22:18:21.600: INFO: Got endpoints: latency-svc-smx46 [658.123497ms] Apr 2 22:18:21.623: INFO: Created: latency-svc-2bn2j Apr 2 22:18:21.636: INFO: Got endpoints: latency-svc-2bn2j [658.062271ms] Apr 2 22:18:21.658: INFO: Created: latency-svc-7cb8c Apr 2 22:18:21.672: INFO: Got endpoints: latency-svc-7cb8c [640.644071ms] Apr 2 22:18:21.732: INFO: Created: latency-svc-586lw Apr 2 22:18:21.745: INFO: Got endpoints: latency-svc-586lw [682.171507ms] Apr 2 22:18:21.815: INFO: Created: latency-svc-cbzxd Apr 2 22:18:21.829: INFO: Got endpoints: latency-svc-cbzxd [730.178078ms] Apr 2 22:18:21.882: INFO: Created: latency-svc-ghf9j Apr 2 22:18:21.889: INFO: Got endpoints: latency-svc-ghf9j [719.611901ms] Apr 2 22:18:21.910: INFO: Created: latency-svc-w5852 Apr 2 22:18:21.938: INFO: Got endpoints: latency-svc-w5852 [754.796766ms] Apr 2 22:18:22.057: INFO: Created: latency-svc-ph4zf Apr 2 22:18:22.096: INFO: Got endpoints: latency-svc-ph4zf [870.421489ms] Apr 2 22:18:22.096: INFO: Created: latency-svc-rllhg Apr 2 22:18:22.118: INFO: Got endpoints: latency-svc-rllhg [849.638818ms] Apr 2 22:18:22.144: INFO: Created: latency-svc-jmscn Apr 2 22:18:22.154: INFO: Got endpoints: latency-svc-jmscn [831.444348ms] Apr 2 22:18:22.217: INFO: Created: latency-svc-pgvfn Apr 2 22:18:22.232: INFO: Got endpoints: latency-svc-pgvfn [867.461387ms] Apr 2 22:18:22.260: INFO: Created: latency-svc-bchf5 Apr 2 22:18:22.274: INFO: Got endpoints: latency-svc-bchf5 [873.63796ms] Apr 2 22:18:22.300: INFO: Created: latency-svc-gtnxd Apr 2 22:18:22.310: INFO: Got endpoints: latency-svc-gtnxd [831.385005ms] Apr 2 22:18:22.355: INFO: Created: latency-svc-znx6r Apr 2 22:18:22.364: INFO: Got 
endpoints: latency-svc-znx6r [855.000764ms] Apr 2 22:18:22.391: INFO: Created: latency-svc-gpkn7 Apr 2 22:18:22.408: INFO: Got endpoints: latency-svc-gpkn7 [861.839951ms] Apr 2 22:18:22.433: INFO: Created: latency-svc-frglk Apr 2 22:18:22.450: INFO: Got endpoints: latency-svc-frglk [850.079538ms] Apr 2 22:18:22.506: INFO: Created: latency-svc-fmskf Apr 2 22:18:22.510: INFO: Got endpoints: latency-svc-fmskf [873.749702ms] Apr 2 22:18:22.540: INFO: Created: latency-svc-2rbmx Apr 2 22:18:22.552: INFO: Got endpoints: latency-svc-2rbmx [879.519177ms] Apr 2 22:18:22.580: INFO: Created: latency-svc-dvgmc Apr 2 22:18:22.588: INFO: Got endpoints: latency-svc-dvgmc [843.651943ms] Apr 2 22:18:22.648: INFO: Created: latency-svc-s7mr7 Apr 2 22:18:22.654: INFO: Got endpoints: latency-svc-s7mr7 [825.191607ms] Apr 2 22:18:22.673: INFO: Created: latency-svc-mltzt Apr 2 22:18:22.685: INFO: Got endpoints: latency-svc-mltzt [796.091756ms] Apr 2 22:18:22.703: INFO: Created: latency-svc-jzmkq Apr 2 22:18:22.715: INFO: Got endpoints: latency-svc-jzmkq [777.056338ms] Apr 2 22:18:22.732: INFO: Created: latency-svc-djs4r Apr 2 22:18:22.745: INFO: Got endpoints: latency-svc-djs4r [649.240174ms] Apr 2 22:18:22.804: INFO: Created: latency-svc-ntqld Apr 2 22:18:22.811: INFO: Got endpoints: latency-svc-ntqld [693.766512ms] Apr 2 22:18:22.847: INFO: Created: latency-svc-sdzjh Apr 2 22:18:22.866: INFO: Got endpoints: latency-svc-sdzjh [712.606482ms] Apr 2 22:18:22.889: INFO: Created: latency-svc-pzqvv Apr 2 22:18:22.902: INFO: Got endpoints: latency-svc-pzqvv [670.216826ms] Apr 2 22:18:22.947: INFO: Created: latency-svc-whnpx Apr 2 22:18:22.957: INFO: Got endpoints: latency-svc-whnpx [682.848592ms] Apr 2 22:18:23.002: INFO: Created: latency-svc-7xdmb Apr 2 22:18:23.029: INFO: Got endpoints: latency-svc-7xdmb [718.384275ms] Apr 2 22:18:23.092: INFO: Created: latency-svc-6dzmz Apr 2 22:18:23.116: INFO: Got endpoints: latency-svc-6dzmz [751.549584ms] Apr 2 22:18:23.116: INFO: Created: latency-svc-ttmts Apr 2 22:18:23.158: INFO: Got endpoints: latency-svc-ttmts [750.213775ms] Apr 2 22:18:23.235: INFO: Created: latency-svc-zvrv8 Apr 2 22:18:23.266: INFO: Got endpoints: latency-svc-zvrv8 [816.592978ms] Apr 2 22:18:23.267: INFO: Created: latency-svc-5q6gx Apr 2 22:18:23.298: INFO: Got endpoints: latency-svc-5q6gx [787.854292ms] Apr 2 22:18:23.327: INFO: Created: latency-svc-6h57t Apr 2 22:18:23.378: INFO: Got endpoints: latency-svc-6h57t [826.531363ms] Apr 2 22:18:23.411: INFO: Created: latency-svc-s79vq Apr 2 22:18:23.426: INFO: Got endpoints: latency-svc-s79vq [837.882317ms] Apr 2 22:18:23.446: INFO: Created: latency-svc-5x5sm Apr 2 22:18:23.464: INFO: Got endpoints: latency-svc-5x5sm [809.378141ms] Apr 2 22:18:23.517: INFO: Created: latency-svc-4gqbb Apr 2 22:18:23.519: INFO: Got endpoints: latency-svc-4gqbb [834.283493ms] Apr 2 22:18:23.548: INFO: Created: latency-svc-bxwtj Apr 2 22:18:23.565: INFO: Got endpoints: latency-svc-bxwtj [850.475308ms] Apr 2 22:18:23.590: INFO: Created: latency-svc-sv9hs Apr 2 22:18:23.602: INFO: Got endpoints: latency-svc-sv9hs [856.303628ms] Apr 2 22:18:23.602: INFO: Latencies: [77.884017ms 149.16021ms 217.93818ms 277.998629ms 379.715299ms 436.936309ms 477.715402ms 551.486755ms 592.389661ms 598.479533ms 622.511118ms 634.981381ms 640.194514ms 640.644071ms 646.280157ms 648.150445ms 649.240174ms 652.063941ms 652.378712ms 652.434919ms 657.473857ms 657.642463ms 658.028969ms 658.062271ms 658.123497ms 658.392414ms 663.895674ms 664.141177ms 664.677999ms 670.216826ms 670.485772ms 672.143666ms 
672.553743ms 674.807443ms 677.017531ms 678.957437ms 681.630653ms 682.156868ms 682.171507ms 682.221338ms 682.396248ms 682.848592ms 682.886337ms 682.918678ms 684.538568ms 688.118687ms 688.275612ms 688.663683ms 689.157335ms 693.375368ms 693.766512ms 694.095841ms 694.13545ms 694.380068ms 696.97354ms 699.991951ms 700.930632ms 702.744696ms 704.71839ms 704.831882ms 705.941392ms 706.064717ms 706.792145ms 711.793684ms 712.606482ms 713.220006ms 716.198033ms 717.370585ms 718.384275ms 719.611901ms 727.329258ms 727.622865ms 728.660347ms 729.194025ms 729.74319ms 730.178078ms 730.846734ms 734.831624ms 735.811784ms 742.352152ms 748.599427ms 748.866899ms 750.213775ms 750.944772ms 751.549584ms 754.796766ms 755.602497ms 756.695256ms 764.723669ms 766.710768ms 777.056338ms 778.143385ms 778.156707ms 778.38956ms 784.089474ms 787.854292ms 789.607567ms 793.345695ms 796.091756ms 797.818531ms 800.632916ms 801.535471ms 807.714711ms 808.141826ms 808.211413ms 809.378141ms 809.707978ms 816.592978ms 817.88066ms 817.948337ms 821.532639ms 825.191607ms 826.531363ms 827.481129ms 831.137992ms 831.385005ms 831.444348ms 833.311224ms 834.283493ms 837.882317ms 843.068568ms 843.651943ms 844.221749ms 849.638818ms 850.079538ms 850.475308ms 855.000764ms 855.805923ms 856.303628ms 856.697157ms 856.827632ms 861.839951ms 862.674486ms 862.889057ms 867.461387ms 867.770971ms 870.421489ms 871.263045ms 872.179948ms 873.419739ms 873.63796ms 873.749702ms 879.519177ms 884.795571ms 885.265981ms 896.997232ms 897.678848ms 898.78869ms 906.229508ms 908.910603ms 914.765846ms 915.831016ms 921.700384ms 921.721767ms 922.990006ms 925.918858ms 927.617341ms 929.471211ms 934.965135ms 942.46832ms 943.407507ms 945.224846ms 948.62014ms 950.56109ms 950.822674ms 951.504097ms 961.489891ms 969.118638ms 970.73259ms 978.913698ms 982.390064ms 983.265249ms 983.997026ms 986.473188ms 989.372721ms 993.780713ms 996.218539ms 1.010390986s 1.010778728s 1.023513059s 1.023552569s 1.029600206s 1.044342618s 1.044657964s 1.054035585s 1.09967875s 1.103378944s 1.112264956s 1.140026011s 1.155660569s 1.181294379s 1.185717084s 1.206322152s 1.220096094s 1.220926172s 1.236530231s 1.244486989s 1.250490916s 1.252948801s 1.260317725s] Apr 2 22:18:23.602: INFO: 50 %ile: 800.632916ms Apr 2 22:18:23.602: INFO: 90 %ile: 1.023552569s Apr 2 22:18:23.602: INFO: 99 %ile: 1.252948801s Apr 2 22:18:23.602: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:18:23.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-746" for this suite. 
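The percentile summary at the end of the latency run ranks the 200 Create-to-endpoint durations and reads off the 50th/90th/99th entries. Below is a small self-contained sketch of that computation (nearest-rank style; the framework's exact ranking rule may differ slightly), seeded with a few of the sampled values from the list above:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the q-th percentile of the samples using a simple
// nearest-rank rule: sort ascending, index at q% of the sample count.
func percentile(samples []time.Duration, q float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted)) * q / 100)
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of the 200 measured values from the run above (nanoseconds).
	samples := []time.Duration{
		77884017, 598479533, 800632916, 1023552569, 1252948801, 1260317725,
	}
	for _, q := range []float64{50, 90, 99} {
		fmt.Printf("%.0f %%ile: %v\n", q, percentile(samples, q))
	}
}
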
• [SLOW TEST:15.429 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":260,"skipped":4267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:18:23.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9332, will wait for the garbage collector to delete the pods Apr 2 22:18:27.760: INFO: Deleting Job.batch foo took: 16.939287ms Apr 2 22:18:27.860: INFO: Terminating Job.batch foo pods took: 100.26287ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:19:09.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9332" for this suite. 
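The "will wait for the garbage collector to delete the pods" phase corresponds to deleting the Job with a propagation policy that hands pod cleanup to the garbage collector. One way to express an equivalent delete with client-go is sketched below; the Job name and namespace come from the log, the foreground policy is an assumption (the framework's own helper may use a different policy and poll instead), and the Delete signature matches v0.17-era client-go (newer releases take a context.Context and a value DeleteOptions):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the Job object is only removed once the
	// garbage collector has deleted its dependent pods.
	policy := metav1.DeletePropagationForeground
	err = cs.BatchV1().Jobs("job-9332").Delete("foo",
		&metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}
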
• [SLOW TEST:45.657 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":261,"skipped":4291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:19:09.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-cdf97a85-20c7-44b1-baf0-7a07d404ca18 STEP: Creating secret with name secret-projected-all-test-volume-ae783acc-b99f-441d-b0c8-ccd1b197161a STEP: Creating a pod to test Check all projections for projected volume plugin Apr 2 22:19:09.394: INFO: Waiting up to 5m0s for pod "projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e" in namespace "projected-409" to be "success or failure" Apr 2 22:19:09.398: INFO: Pod "projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40507ms Apr 2 22:19:11.401: INFO: Pod "projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007665064s Apr 2 22:19:13.406: INFO: Pod "projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012117008s STEP: Saw pod success Apr 2 22:19:13.406: INFO: Pod "projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e" satisfied condition "success or failure" Apr 2 22:19:13.409: INFO: Trying to get logs from node jerma-worker pod projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e container projected-all-volume-test: STEP: delete the pod Apr 2 22:19:13.453: INFO: Waiting for pod projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e to disappear Apr 2 22:19:13.465: INFO: Pod projected-volume-4dab14a1-7b77-4620-9fac-59a1f399295e no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:19:13.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-409" for this suite. 
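The "Projected combined" case mounts a single volume that merges a ConfigMap, a Secret, and downward-API metadata into one directory. The shape of such a volume is sketched below with placeholder source names; the real test uses generated names like configmap-projected-all-test-volume-… as seen in the log:

package main

import corev1 "k8s.io/api/core/v1"

// projectedAllVolume merges a ConfigMap, a Secret, and downward-API metadata
// into a single mounted directory, as the "Projected combined" test does.
func projectedAllVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // placeholder
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"}, // placeholder
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
}

func main() { _ = projectedAllVolume() }
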
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4318,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:19:13.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 2 22:19:17.592: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:19:17.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-396" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4335,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:19:17.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-b82894fe-7649-40d9-bc8c-e14d9d0e7e7a STEP: Creating a pod to test consume configMaps Apr 2 22:19:17.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd" in namespace "configmap-2221" to be "success or failure" Apr 2 22:19:17.723: INFO: Pod "pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385489ms Apr 2 22:19:19.726: INFO: Pod "pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007566258s Apr 2 22:19:21.731: INFO: Pod "pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01225174s STEP: Saw pod success Apr 2 22:19:21.731: INFO: Pod "pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd" satisfied condition "success or failure" Apr 2 22:19:21.738: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd container configmap-volume-test: STEP: delete the pod Apr 2 22:19:21.773: INFO: Waiting for pod pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd to disappear Apr 2 22:19:21.789: INFO: Pod pod-configmaps-ed9171ec-1cc0-41a5-9a77-a36df16d6cdd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:19:21.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2221" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4342,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:19:21.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:19:41.910: INFO: Container started at 2020-04-02 22:19:24 +0000 UTC, pod became ready at 2020-04-02 22:19:40 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:19:41.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4513" for this suite. 
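The readiness-probe case above hinges on InitialDelaySeconds: the kubelet does not run the probe before the delay, so the pod cannot become Ready earlier, which is exactly what the "started at / became ready at" timestamps verify. A minimal container spec illustrating the mechanism (image, command, and timings are illustrative, not the e2e fixture's):

package main

import corev1 "k8s.io/api/core/v1"

// readinessProbed returns a container whose readiness probe cannot fire
// before InitialDelaySeconds, so the pod is never Ready before that point.
// The probe itself always succeeds once it starts running, so readiness
// flips shortly after the initial delay and the container never restarts.
func readinessProbed() corev1.Container {
	return corev1.Container{
		Name:    "probed",
		Image:   "busybox", // illustrative
		Command: []string{"sh", "-c", "sleep 600"},
		ReadinessProbe: &corev1.Probe{
			// The embedded field is named Handler in the v1.17-era API used
			// by this log; recent releases rename it to ProbeHandler.
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"/bin/true"}},
			},
			InitialDelaySeconds: 10, // not Ready before ~10s after start
			PeriodSeconds:       5,
		},
	}
}

func main() { _ = readinessProbed() }
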
• [SLOW TEST:20.121 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4362,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:19:41.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:19:42.005: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 2 22:19:47.011: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 2 22:19:47.011: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 2 22:19:49.015: INFO: Creating deployment "test-rollover-deployment" Apr 2 22:19:49.029: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 2 22:19:51.037: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 2 22:19:51.043: INFO: Ensure that both replica sets have 1 created replica Apr 2 22:19:51.049: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 2 22:19:51.056: INFO: Updating deployment test-rollover-deployment Apr 2 22:19:51.056: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 2 22:19:53.086: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 2 22:19:53.093: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 2 22:19:53.099: INFO: all replica sets need to contain the pod-template-hash label Apr 2 22:19:53.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462791, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 22:19:55.107: INFO: all replica sets need to contain the pod-template-hash label Apr 2 22:19:55.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462794, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 22:19:57.107: INFO: all replica sets need to contain the pod-template-hash label Apr 2 22:19:57.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462794, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 22:19:59.106: INFO: all replica sets need to contain the pod-template-hash label Apr 2 22:19:59.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462794, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 22:20:01.107: INFO: all replica sets need to contain the pod-template-hash label Apr 2 22:20:01.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462794, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 22:20:03.107: INFO: all replica sets need to contain the pod-template-hash label Apr 2 22:20:03.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462794, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462789, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 22:20:05.107: INFO: Apr 2 22:20:05.107: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 2 22:20:05.116: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2167 /apis/apps/v1/namespaces/deployment-2167/deployments/test-rollover-deployment 936ace06-73be-48ea-b47f-f789b8f7fa64 4869802 2 2020-04-02 22:19:49 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b77488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-02 22:19:49 +0000 UTC,LastTransitionTime:2020-04-02 22:19:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-02 22:20:04 +0000 UTC,LastTransitionTime:2020-04-02 22:19:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 2 22:20:05.118: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-2167 /apis/apps/v1/namespaces/deployment-2167/replicasets/test-rollover-deployment-574d6dfbff 61d61ca8-180f-4ea6-b90b-7ecd7de29163 4869791 2 2020-04-02 22:19:51 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 936ace06-73be-48ea-b47f-f789b8f7fa64 0xc00451cc07 0xc00451cc08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00451cc78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 2 22:20:05.118: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 2 22:20:05.119: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2167 /apis/apps/v1/namespaces/deployment-2167/replicasets/test-rollover-controller af51e471-6b71-484b-bad8-b694b33df5d9 4869800 2 2020-04-02 22:19:41 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 936ace06-73be-48ea-b47f-f789b8f7fa64 0xc00451ca4f 0xc00451ca60}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] 
[] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00451cb98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 22:20:05.119: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-2167 /apis/apps/v1/namespaces/deployment-2167/replicasets/test-rollover-deployment-f6c94f66c f0730ccf-3d88-4293-9150-9be5543b62e4 4869739 2 2020-04-02 22:19:49 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 936ace06-73be-48ea-b47f-f789b8f7fa64 0xc00451cce0 0xc00451cce1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00451cd58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 22:20:05.122: INFO: Pod "test-rollover-deployment-574d6dfbff-5mcb8" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5mcb8 test-rollover-deployment-574d6dfbff- deployment-2167 /api/v1/namespaces/deployment-2167/pods/test-rollover-deployment-574d6dfbff-5mcb8 b13f8d7b-4d3b-47d0-827d-041ff2010ae3 4869759 0 2020-04-02 22:19:51 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 61d61ca8-180f-4ea6-b90b-7ecd7de29163 0xc00451d2e7 0xc00451d2e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lzmsl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lzmsl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lzmsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:19:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:19:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:19:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:19:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.67,StartTime:2020-04-02 22:19:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 22:19:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://f51a1489b3b2c2c26754e088fef17d2fbe1dcaecf6424cc2406e3ef9478e1f5e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:20:05.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2167" for this suite. • [SLOW TEST:23.210 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":266,"skipped":4363,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:20:05.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1634 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 2 22:20:05.435: INFO: Found 0 stateful pods, waiting for 3 Apr 2 22:20:15.440: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 22:20:15.440: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 22:20:15.440: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 2 22:20:15.463: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 2 22:20:25.498: INFO: Updating stateful set ss2 Apr 2 
22:20:25.508: INFO: Waiting for Pod statefulset-1634/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 2 22:20:35.635: INFO: Found 2 stateful pods, waiting for 3 Apr 2 22:20:45.640: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 22:20:45.640: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 22:20:45.640: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 2 22:20:45.664: INFO: Updating stateful set ss2 Apr 2 22:20:45.678: INFO: Waiting for Pod statefulset-1634/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 2 22:20:55.702: INFO: Updating stateful set ss2 Apr 2 22:20:55.712: INFO: Waiting for StatefulSet statefulset-1634/ss2 to complete update Apr 2 22:20:55.712: INFO: Waiting for Pod statefulset-1634/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 2 22:21:05.719: INFO: Deleting all statefulset in ns statefulset-1634 Apr 2 22:21:05.722: INFO: Scaling statefulset ss2 to 0 Apr 2 22:21:25.761: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 22:21:25.764: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:21:25.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1634" for this suite. • [SLOW TEST:80.678 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":267,"skipped":4380,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:21:25.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 2 22:21:25.871: INFO: Waiting up to 5m0s for pod "pod-96442fd7-5009-4fba-97d1-3ded7c938f2e" in namespace "emptydir-9252" to be "success or failure" Apr 2 22:21:25.874: INFO: Pod 
"pod-96442fd7-5009-4fba-97d1-3ded7c938f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.46101ms Apr 2 22:21:27.938: INFO: Pod "pod-96442fd7-5009-4fba-97d1-3ded7c938f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067137973s Apr 2 22:21:29.942: INFO: Pod "pod-96442fd7-5009-4fba-97d1-3ded7c938f2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070951855s STEP: Saw pod success Apr 2 22:21:29.942: INFO: Pod "pod-96442fd7-5009-4fba-97d1-3ded7c938f2e" satisfied condition "success or failure" Apr 2 22:21:29.944: INFO: Trying to get logs from node jerma-worker pod pod-96442fd7-5009-4fba-97d1-3ded7c938f2e container test-container: STEP: delete the pod Apr 2 22:21:29.983: INFO: Waiting for pod pod-96442fd7-5009-4fba-97d1-3ded7c938f2e to disappear Apr 2 22:21:29.988: INFO: Pod pod-96442fd7-5009-4fba-97d1-3ded7c938f2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:21:29.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9252" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:21:29.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 2 22:21:34.121: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:21:34.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6393" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4407,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:21:34.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-pqtr STEP: Creating a pod to test atomic-volume-subpath Apr 2 22:21:34.279: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pqtr" in namespace "subpath-5848" to be "success or failure" Apr 2 22:21:34.283: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Pending", Reason="", readiness=false. Elapsed: 3.615072ms Apr 2 22:21:36.287: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007736762s Apr 2 22:21:38.291: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 4.011823514s Apr 2 22:21:40.295: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 6.015889995s Apr 2 22:21:42.315: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 8.035972719s Apr 2 22:21:44.320: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 10.040372143s Apr 2 22:21:46.324: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 12.044506101s Apr 2 22:21:48.327: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 14.048164507s Apr 2 22:21:50.332: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 16.052855539s Apr 2 22:21:52.336: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 18.056904433s Apr 2 22:21:54.340: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 20.060820596s Apr 2 22:21:56.344: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Running", Reason="", readiness=true. Elapsed: 22.065012608s Apr 2 22:21:58.348: INFO: Pod "pod-subpath-test-projected-pqtr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.06859717s STEP: Saw pod success Apr 2 22:21:58.348: INFO: Pod "pod-subpath-test-projected-pqtr" satisfied condition "success or failure" Apr 2 22:21:58.351: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-pqtr container test-container-subpath-projected-pqtr: STEP: delete the pod Apr 2 22:21:58.395: INFO: Waiting for pod pod-subpath-test-projected-pqtr to disappear Apr 2 22:21:58.407: INFO: Pod pod-subpath-test-projected-pqtr no longer exists STEP: Deleting pod pod-subpath-test-projected-pqtr Apr 2 22:21:58.407: INFO: Deleting pod "pod-subpath-test-projected-pqtr" in namespace "subpath-5848" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:21:58.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5848" for this suite. • [SLOW TEST:24.257 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":270,"skipped":4425,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:21:58.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 2 22:21:58.572: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 2 22:22:03.576: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 2 22:22:03.576: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 2 22:22:03.593: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7737 /apis/apps/v1/namespaces/deployment-7737/deployments/test-cleanup-deployment 5d06e26c-bb4d-4d60-ac01-b18fc689a014 4870541 1 2020-04-02 22:22:03 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004bd2b88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 2 22:22:03.620: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7737 /apis/apps/v1/namespaces/deployment-7737/replicasets/test-cleanup-deployment-55ffc6b7b6 e7cba71a-1a04-4a4d-af96-ff1bce0bcb4f 4870543 1 2020-04-02 22:22:03 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 5d06e26c-bb4d-4d60-ac01-b18fc689a014 0xc004bd3097 0xc004bd3098}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004bd3108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 22:22:03.620: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 2 22:22:03.620: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7737 /apis/apps/v1/namespaces/deployment-7737/replicasets/test-cleanup-controller 5121ec3c-9982-4e2f-bd28-1bf2aa450f85 4870542 1 2020-04-02 22:21:58 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 5d06e26c-bb4d-4d60-ac01-b18fc689a014 0xc004bd2fc7 0xc004bd2fc8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] 
nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004bd3028 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 2 22:22:03.672: INFO: Pod "test-cleanup-controller-crrsn" is available: &Pod{ObjectMeta:{test-cleanup-controller-crrsn test-cleanup-controller- deployment-7737 /api/v1/namespaces/deployment-7737/pods/test-cleanup-controller-crrsn c6d18676-1655-45a1-b55d-07928ea5f61f 4870523 0 2020-04-02 22:21:58 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 5121ec3c-9982-4e2f-bd28-1bf2aa450f85 0xc0038acee7 0xc0038acee8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xbtl8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xbtl8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xbtl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:21:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:22:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:22:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:21:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.74,StartTime:2020-04-02 22:21:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 22:22:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3f5679e5f8a5920e94fb35ee5cdf70b4d66b82ad6b3d44492a28e98c2bd7cd2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 22:22:03.672: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-shq8s" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-shq8s test-cleanup-deployment-55ffc6b7b6- deployment-7737 /api/v1/namespaces/deployment-7737/pods/test-cleanup-deployment-55ffc6b7b6-shq8s b229b58c-a07f-4b38-adfd-a74ea1003275 4870549 0 2020-04-02 22:22:03 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 e7cba71a-1a04-4a4d-af96-ff1bce0bcb4f 0xc0038ad087 0xc0038ad088}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xbtl8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xbtl8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xbtl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 22:22:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:03.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7737" for this suite. 
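------------------------------
Worth noting in the dump above: "test-cleanup-deployment" is created with RevisionHistoryLimit:*0, which is what obliges the controller to delete the old "test-cleanup-controller" ReplicaSet as soon as it is scaled down, instead of keeping it around for rollback. A minimal sketch of a Deployment using the same knob; the name is illustrative, while the labels and image mirror the test's:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo              # illustrative name
spec:
  revisionHistoryLimit: 0         # retain no old ReplicaSets after a rollout
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8

After a template change (for example kubectl set image deployment/cleanup-demo agnhost=<new image>), kubectl get rs should eventually list only the newest ReplicaSet, which is exactly the condition this spec waits for.
------------------------------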
• [SLOW TEST:5.261 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":271,"skipped":4434,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:22:03.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-af57dbc4-8188-4c23-bc1e-27403049ffff [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:03.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7205" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":272,"skipped":4454,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:22:03.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 22:22:04.382: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 22:22:06.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462924, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462924, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462924, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462924, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 22:22:09.422: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:09.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6237" for this suite. STEP: Destroying namespace "webhook-6237-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.950 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":273,"skipped":4462,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:22:09.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 2 22:22:09.898: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c" in namespace "downward-api-6473" to be "success or failure" Apr 2 22:22:09.921: INFO: Pod 
"downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.3045ms Apr 2 22:22:11.924: INFO: Pod "downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025697912s Apr 2 22:22:13.937: INFO: Pod "downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038603373s STEP: Saw pod success Apr 2 22:22:13.937: INFO: Pod "downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c" satisfied condition "success or failure" Apr 2 22:22:13.939: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c container client-container: STEP: delete the pod Apr 2 22:22:13.968: INFO: Waiting for pod downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c to disappear Apr 2 22:22:13.973: INFO: Pod downwardapi-volume-e5977e2d-3ea3-4da6-9d43-000b74aaa91c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:13.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6473" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:22:13.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-2ec457d8-6ecb-4544-9473-3982239a1ed2 STEP: Creating a pod to test consume configMaps Apr 2 22:22:14.035: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374" in namespace "projected-1970" to be "success or failure" Apr 2 22:22:14.064: INFO: Pod "pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374": Phase="Pending", Reason="", readiness=false. Elapsed: 29.004752ms Apr 2 22:22:16.068: INFO: Pod "pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033098338s Apr 2 22:22:18.072: INFO: Pod "pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036957542s STEP: Saw pod success Apr 2 22:22:18.072: INFO: Pod "pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374" satisfied condition "success or failure" Apr 2 22:22:18.075: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374 container projected-configmap-volume-test: STEP: delete the pod Apr 2 22:22:18.122: INFO: Waiting for pod pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374 to disappear Apr 2 22:22:18.126: INFO: Pod pod-projected-configmaps-cc4bfa02-372b-49c6-9ad4-e124f3a3f374 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:18.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1970" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4494,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:22:18.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 22:22:19.161: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 22:22:21.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462939, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462939, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462939, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721462939, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 22:22:24.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation 
webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:24.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8644" for this suite. STEP: Destroying namespace "webhook-8644-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.630 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":276,"skipped":4500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:22:24.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2374e0ee-aea4-4d83-a438-f329ca0f7d2b STEP: Creating a pod to test consume secrets Apr 2 22:22:24.896: INFO: Waiting up to 5m0s for pod "pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb" in namespace "secrets-9996" to be "success or failure" Apr 2 22:22:24.908: INFO: Pod "pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.797567ms Apr 2 22:22:26.914: INFO: Pod "pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017986994s Apr 2 22:22:28.918: INFO: Pod "pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022069546s STEP: Saw pod success Apr 2 22:22:28.918: INFO: Pod "pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb" satisfied condition "success or failure" Apr 2 22:22:28.922: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb container secret-volume-test: STEP: delete the pod Apr 2 22:22:28.940: INFO: Waiting for pod pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb to disappear Apr 2 22:22:28.944: INFO: Pod pod-secrets-b9d0534e-4fe9-41a3-b2ee-4e4a660c72bb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:28.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9996" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4534,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 2 22:22:28.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 22:22:28.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9597' Apr 2 22:22:29.122: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 2 22:22:29.122: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Apr 2 22:22:31.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9597' Apr 2 22:22:31.317: INFO: stderr: "" Apr 2 22:22:31.317: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 2 22:22:31.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9597" for this suite. 
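------------------------------
The stderr captured above is the v1.17 kubectl warning that generator-based kubectl run is deprecated; the spec drives that deprecated path deliberately. Outside the suite, equivalent objects can be created without it. A short sketch using only the commands shown in, or suggested by, the warning (the pod name in the last line is illustrative):

# Deprecated form exercised by the spec: on this client, kubectl run defaults
# to the deployment/apps.v1 generator and creates a Deployment
kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
# Non-deprecated equivalents per the warning:
kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
kubectl run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine
------------------------------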
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":278,"skipped":4540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSApr 2 22:22:31.324: INFO: Running AfterSuite actions on all nodes Apr 2 22:22:31.324: INFO: Running AfterSuite actions on node 1 Apr 2 22:22:31.324: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0} Ran 278 of 4843 Specs in 4512.780 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped PASS